The following sections contain summaries and notes of the ideas and concepts discussed during the face-to-face sessions of the Block Two training, but they do not replicate them and cannot replace them. In addition to the information found here, your facilitator(s) should provide you with any presentations or materials related to in-class exercises. During the training, presentations will be interspersed with practical exercises and working sessions during which you will develop and present sections of your M&E Framework.
Session 1: Introduction to Monitoring and Evaluation: Rationale & Concepts
In this session, you will develop your understanding of what M&E is and why it is relevant to research capacity building and discuss the importance of establishing clear goals and objectives for your activities.
Learning objectives
By the end of this session, you will be able to:
1. Define “monitoring” and “evaluation” and compare definitions;
2. Explain the importance of M&E for research capacity building activities; and
3. Establish goals and objectives for research capacity building activities.
What is “monitoring”? What is “evaluation”?
As we mentioned in Block One, it is important to distinguish between monitoring and evaluation. In practice, most of what we do in the ambit of research capacity building is to monitor the outputs and/or short-term outcomes of training programmes. Evaluation assesses the potential attribution of outcomes and/or impact (either in whole or in part) to the programme or activity we are carrying out.
Many M&E experts maintain a strict distinction between monitoring and evaluation. One of our first tasks is to define these basic concepts, so that we all speak the same language.
Monitoring is an ongoing process of data collection. It involves counting what we are doing and tracking changes in programme performance over time. Monitoring is sometimes referred to as “process evaluation.” It addresses the following questions:
• Are activities carried out as planned?
• Are resources used as expected?
• What services are provided, to whom, when, how often, for how long, and in what context?
• Are the activities accessible?
• Is the quality of the activities adequate?
• Is the target population being reached?
Evaluation is the use of social research tools to systematically assess how well activities have met (or not met) expected objectives, and if they have had unintended consequences (either positive or negative). Evaluation requires a specific study design. It sometimes requires a control or comparison group, but this is not always feasible when we evaluate capacity building activities.
Evaluation addresses the following questions:
• Does the programme or activity generate change?
• What change can be observed and measured?
• To what extent is the programme or activity responsible for observed changes?
Monitoring and evaluation both require knowledge of baseline and final values, sometimes with an interim measurement during the project. However, evaluation differs from monitoring in that its goal is to determine how much of the observed change in outcome is due to the programme or activity. It is an analytical exercise to help you and others understand when, how, and to what extent an activity is responsible for measured impact. It is important to note, however, that relatively few evaluations can go as far as to establish a cause-and-effect relationship between the activity and the change. We will address this issue in more depth further on.
Why is M&E important?
Many of us have participated in M&E activities because we have been required to do so by funders, by administrators, or in service of other reporting requirements. We may have found ourselves moving through M&E processes that yielded little beyond ticking a necessary box. When done thoughtfully, thoroughly, and well, M&E processes should:
1. Identify, gather, and analyse information that makes our work more effective.
2. Identify, gather, and analyse information that makes our work more efficient.
3. Support transparent and useful communication with our partners.
4. Help us to compare what we planned with what happens.
5. Nourish individual and institutional learning processes, engage teams in enriching debates and conversations, and result in positive change.
Goals and objectives
For your research capacity building activity to be successful, you need to be able to clearly answer two deceptively simple questions: “Why am I doing this?” – what is the goal to which this activity is contributing? and “What am I doing this for?” – what results do I want to achieve?
Setting clear goals and objectives for your work is important because it helps you, and others, to build or contribute to your activity or programme, to collect and analyse information to understand to what degree it is successful, and to develop ideas for how it can be improved. If you understand why and for what you are undertaking your research capacity building activity, then you will be able to help others understand as well. In general, it is necessary to involve a diverse group of people to develop and implement effective M&E systems. It is hard to succeed in these efforts without a team of people who understand, and hopefully share, our goals and objectives.
Throughout this workshop, we look at our activities, outcomes, and impact at the institutional level. That is, we consider both the activity targets (the people who participate in the activity) and the beneficiaries of our activities (the population in which we will measure impact) to be within our institutions, and we treat goals as institutional-level goals. This way of framing the M&E process facilitates developing frameworks for research capacity building activities, but it is a choice. We could easily define our activity targets, beneficiaries, goals, and objectives differently.
Most institutions have multiple long-term goals. These are the big, mission-related ideas that they use to orient their work. Research capacity building activities should be aligned with at least one institutional goal, and not be in contradiction with any other. You should also have specific objectives for your research capacity building activity. These objectives should be aligned with an institutional goal but are more related to the change you wish to see happen because of this programme or activity.
Let us take a moment to return to the case study – Maria and her GCP workshop.
As Maria develops her M&E Framework for her Friday presentation, she asks some of her colleagues, who will be teaching in the workshop, to join her for a brainstorming session. She finds herself thinking about the Institute for Research’s most recent strategic plan. All the teams at The Institute contributed to it, mostly providing data on their scientific output for the year, and projecting projects, funding, and new lines of research for the year ahead. Maria had been aware of the growth of The Institute over the past 10 years, but it was really brought home to her when she learned in a strategic planning meeting that The Institute for Research was aiming to become a regional centre for clinical trials. Her colleagues agree that it would have been difficult to imagine that ambitious goal a decade ago, when Maria was working on her post-doc, but now it does seem within reach.
As the group debates how to frame the GCP workshop, they start with The Foundation-funded Phase III trial. After all, that is where this whole story got started. But, after some reflection, Maria suggests that they place the activity in the broader, institutional context of becoming a regional centre for clinical trials. That should be where all of this is leading, right?
The group talks about what change they want to see from this GCP course. It is important to all of them that it is not just about checking a box for the Foundation. After all, like the Foundation, they themselves are interested in impact. What should be different when this activity is finished? Maria ventures that the training should improve staff capacity to carry out clinical trials in accordance with Good Clinical Practices. The group agrees: if it does not do that, it will be a failure.
Maria and her colleagues have brought together a lot of background and information about their institutional context to frame a goal and objective for their activity. But could we imagine other alternative goals or objectives? Goals and objectives are not inevitable right answers that we must figure out. Rather, they reflect the priorities and interests of people and institutions. Determining the goals and objectives of your research capacity building activity affects the entire M&E process because these priorities and interests will influence what results you expect and desire, what you want to measure to decide if these results have been accomplished, and how you will measure the change.
Session 2: M&E Frameworks: Approach & Elements
In this session, you will develop your understanding of M&E frameworks and how they are used in monitoring and evaluation.
Learning objectives
By the end of this session, you will be able to:
1. Explain the main elements of an M&E Framework;
2. Describe uses of an M&E Framework for planning; and
3. List the five steps to building an M&E Framework.
M&E frameworks
At this stage, you have seen and heard us use the term “M&E framework” many times. Before you can move forward with your projects and before we can develop more useful concepts, we need to unpack this term.
The approach to M&E that we adopt in the context of this workshop is the Results Framework, so when we refer to M&E Frameworks, that is what we mean. As you are no doubt aware, there are diverse approaches to activity and programme planning, and M&E. You have probably heard of Theory of Change, and Logical Frameworks, for example. As you move forward and develop your set of M&E tools, you may choose to investigate these and other approaches. Some are complementary to Results-Based M&E and can be used together with it.
Our results framework is a tool that we use to structure the assessment of our activities, projects, and programmes. It is developed through a design process in which we define and refine our goals and objectives, the assumptions we make about our environment, and the results we desire and expect from our research capacity building activity.
The framework should help us with:
• Planning: Developing our goals and objectives, recognising factors that can influence outcomes, establishing realistic and measurable expected results.
• Oversight: Monitoring important elements of our activities, reacting when things go off track.
• Analysing achievement: Determining the results of our activities, analysing, and sharing data.
Five steps to build our M&E Framework
As you will see, we have already worked through the first two steps on this short list. Over the course of the face-to-face training, and the next three sessions, we will work through the remaining three items.
1. Identify the activity that will be monitored and/or evaluated.
2. State specific objectives for the activity, as well as the general objective or goal(s) of the institution to which the activity will contribute.
3. Create a results chain, using the capacity building activity as a starting point and adding the expected inputs, outputs, outcomes, and impacts (each category should have at least one component, but there may be more than one per category).
4. Create the indicators that will be used to assess each of the components listed in each of the steps of the results chain.
5. Define the sources of the data that will be used to assess the indicators and how data quality will be assured.
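To make steps 3 to 5 concrete, here is a minimal sketch, in Python, of how a results chain and its indicators might be recorded. This is not a prescribed format – the field names and example entries are illustrative assumptions drawn from Maria’s case:

```python
# A minimal sketch of a results framework as plain data structures.
# Field names and entries are illustrative assumptions, not a prescribed format.

results_chain = {  # step 3: the chain, with at least one component per category
    "activity": "Three-day, face-to-face GCP workshop",
    "inputs": ["Facilitator hours", "Classrooms", "Slide decks", "Online platform"],
    "outputs": ["Institute staff trained in Good Clinical Practices"],
    "outcomes": ["Improved capacity to implement trials according to GCP"],
    "impacts": ["Institute recognised as a regional centre for clinical trials"],
}

indicators = {  # step 4: indicators per component; step 5: their data sources
    "outputs": [
        {
            "indicator": "Number of staff scoring 8/10 or higher on the final test",
            "data_source": "Workshop test records",
        }
    ],
}
```

However you record it, the point is the same: every component of the chain should end up with at least one indicator and a named data source.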
Developing an M&E Plan
In this workshop, we concentrate on how to develop an M&E framework only, but ideally this task should be embedded within an M&E plan. While developing a full plan is beyond the scope of what we will do here, you should know what elements it includes.
As we have mentioned before, we sometimes think of monitoring and evaluation as something that begins once an activity gets underway, but you can include this perspective from the earliest point in your activity planning. The development of an M&E plan should occur in parallel to the design of your capacity building activity. This has implications for how you design your activity to achieve your objectives. It can also influence the tools and inputs you will need – for example, a pre-test for participants that establishes a baseline against which to measure change achieved because of your activity.
Any time you include M&E in your work, you will need an M&E plan – a document that describes how M&E will function during a defined period. These plans vary in complexity depending on your activity and objectives.
Elements of an M&E plan
A full M&E plan includes the following elements:
1. Introduction
Explains why the plan was created, and which activities will be monitored and evaluated.
2. Goals and objectives
Describes the goals and objectives of the M&E plan.
3. Purpose of M&E Plan
Explains how the results of the M&E will be used.
4. M&E Team
Includes who will be engaged and who will do what.
5. Implementation Plan
Details the steps that will be taken to enact the plan.
6. Audience Analysis
Discusses who will have access to the M&E results and why.
7. Results Framework
Incorporates your results framework and provides a rationale for its components.
8. Indicators
Lists the indicators chosen to measure each element of the results chain.
9. Indicator Reference Sheets
Forms to record the indicator values obtained during the M&E process.
10. Data Quality Plan
Explains how you will ensure that the data collected will be of sufficient quality.
11. Reporting Plan & Data Use
Describes how the M&E results will be reported and disseminated and how they will be used for programme improvement.
12. Mechanisms to update M&E plan
Includes the mechanisms that you will use to regularly update the M&E plan.
Session 3: The Results Chain: Components & Links
In this session, you will familiarise yourself with the results chain, its components, and how they are linked.
Learning objectives
By the end of this session, you will be able to:
1. Define the main components of the results chain;
2. Explain how the results chain components are linked; and
3. Differentiate attribution and contribution in the context of activity impact.
The Results Chain
A results chain breaks our activity, project, or programme into different components. This can help us in our design process, as well as when it comes time to analyse our results. The chain organises these components into a linear flow that suggests their relationships. Real life can be complicated – there are many factors and influences on how activities function and the results they produce. The results chain provides a model for us to think through factors within and beyond our control, to determine what our priorities are, and to look at our assumptions.
Can Maria say that workshop participants’ capacity to work on clinical trials in accordance with Good Clinical Practices has improved?
To know that, Maria would have had to perform a baseline assessment of the participants’ GCP knowledge, skills, or implementation before the workshop. You can assess knowledge and skills through tests, and implementation through direct observation. It is important to note, however, that professional practice is a complex concept that can be influenced by many factors, including working conditions and incentives. If Maria had completed a pre-assessment, she could compare those results with results from a similar test or observation done after participation in the workshop. With this information, Maria could assess whether these successful workshop participants had increased their knowledge and skills or improved their implementation. She could assess this immediately after the workshop and again after some time, to see if the improvement was durable.
The change in the way workshop participants implement their work is what we want to measure, and this is what we call an “outcome.” Outcomes are the changes we see in the activity targets, which here are the participants in Maria’s GCP workshop. It is quite possible to have an activity with a good output but a poor outcome. This may happen if, after the activity, the participants’ knowledge and skills are correct and sufficient, but this does not translate into improved work implementation. There are many reasons why this could occur. Perhaps Maria’s course design or assessments were not adequate. Or perhaps there are other obstacles in the work environment that prevent participants from performing their work at a higher level even once they are trained.
There are five main components to the results chain (Fig 1). We will look at each in turn, keeping Maria and her GCP workshop in mind.
Figure 1. The results chain
Inputs
Inputs are the financial, human, and material resources used to implement specific activities. For Maria, these include the hours she dedicated as the organiser and as a facilitator, in addition to the time put in by the three colleagues she recruited as co-facilitators. She has also included the two classrooms she reserved for the three-day, face-to-face training, some slide decks she developed, a case study, a background reading she used, and an online platform, which she used to distribute materials to the participants. Of course, all of this had to fit within the budget set by The Foundation. A budget can be an alternative way of capturing inputs by defining the financial resources that are available to pay for all necessary inputs. Individuals who are eligible to participate in the activity are also inputs (at this stage they are defined as “candidates”).
Activities
Activities (also called “processes”) are the actions taken or work performed through which inputs are mobilised to produce specific products or outputs. Maria’s activity was a three-day, face-to-face workshop with both theoretical and hands-on components. She included a final test to determine if participants would pass the course. Individuals who have been accepted and engage in the training are part of the activity.
Outputs
Outputs are direct products or deliverables of programme activities. Typical outputs of capacity building activities are participants who complete the activity (if only attendance is checked), and/or the students who are trained (for example, if they successfully pass an evaluation), the knowledge and skills acquired, or the material developed as a result of the activity. Maria set her output as Institute for Research staff members trained in Good Clinical Practices.
Outcomes
Outcomes are changes that occur both immediately and some time after the activities are completed. Those changes are identified at the level of the target population, in our case, the participants trained in the capacity building activities. It is crucial to distinguish between outputs and outcomes, so let us look at an example:
Let us say that 50 staff members from the Institute for Research successfully completed Maria’s workshop on Good Clinical Practices, which she designed to include theoretical and practical activities. Maria knows that these 50 participants have “successfully completed the workshop” because they have all scored 8/10 or higher on the final test she designed for the workshop, the minimum threshold to pass. Since the workshop and the final test involved both theoretical and practical exercises, we can say that these 50 workshop participants possess the GCP knowledge and skills that Maria was hoping to see. But how do we know that these participants did not have these skills and this knowledge before they participated in the workshop? A pre-assessment of some kind – a test or direct observation – would be necessary to attribute a change in knowledge and skills to participation in the workshop.
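The logic of this pre/post comparison is simple enough to sketch in a few lines of code. The sketch below is illustrative only – the participant records and scores are invented assumptions, while the 8/10 threshold is taken from Maria’s test:

```python
# Sketch: compare baseline (pre) and post-workshop test scores per participant.
# Participant records and scores are invented for illustration.

PASS_THRESHOLD = 8  # Maria's minimum score (out of 10) to count as "trained"

participants = [
    {"id": "P01", "pre_score": 5, "post_score": 9},
    {"id": "P02", "pre_score": 8, "post_score": 8},  # met the threshold before training
    {"id": "P03", "pre_score": 4, "post_score": 7},
]

trained = [p for p in participants if p["post_score"] >= PASS_THRESHOLD]   # output
improved = [p for p in participants if p["post_score"] > p["pre_score"]]  # outcome signal

print(f"Output: {len(trained)} of {len(participants)} passed the final test")
print(f"Outcome signal: {len(improved)} scored higher than their baseline")
```

Note the second participant: they pass the final test, and so count toward the output, without improving on their baseline. That is exactly the distinction a pre-assessment makes visible.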
Impact
Impact is usually defined as the change observed at the population level. For our purposes, we will define our target population as the institution and look at impact as the change we observe at the institutional level. Here, impact is related to institutional goals.
Attribution and contribution
You may have noticed that as you progress along the results chain, moving from inputs to impact, it becomes more difficult to determine to what degree your activity can be considered the cause of the results. Outputs can be linked directly to the activity; this is what we call attribution. But how certain are we that the outcomes are the direct result of your activity?
For example, in our case study, increased capacity to implement Good Clinical Practices can be the result of participation in a well-planned and well-implemented workshop, but other things, like the incentive of a promotion, changes in working conditions, or even a better working atmosphere, can also contribute. In this scenario, we can say that improved GCP capacity can be partially attributed to the workshop, or that the workshop has contributed to improving staff capacity.
In the case of impact, attribution is even more difficult to establish, as more and more factors may influence how and to what degree we reach our objectives. Continuing with our case, completing a clinical trial according to the funder’s requirements (specific objective) does not depend exclusively on increased GCP capacity, but also on other elements such as the timely availability and quality of resources (equipment and reagents), staff motivation and incentives, project management conditions, capacity to recruit and retain participants, etc.
Even if we are able to guarantee all these elements and successfully complete the research project, benefiting our institution’s reputation and, as a result, being considered a potential grant recipient for further projects, this will also be conditioned by a number of other internal factors (how well other departments of the institution work, the type and quality of the institutional leadership) and external factors (the number and quality of competitors, the evolving financial landscape of the funder). In this case, it is safer to say that the training activity has contributed to reaching the objectives of the institution, rather than trying to establish attribution.
Now, let us return to our case study and look at Maria’s draft sketch of her results chain (Fig 2). What do you think of her work so far?
Figure 2. Maria's draft results chain
Session 4: Indicators: Definition & Development
In this session, you will dive into indicators and learn how they are used within an M&E framework.
Learning objectives
By the end of this session, you will be able to:
1. Define what an indicator is and how indicators are used in M&E;
2. Formulate indicators;
3. Define expected results for indicators; and
4. Identify data sources.
What is an indicator?
An indicator is a variable that measures one aspect of an activity.
To work as measures of change, indicators must be variables. That is, they must vary between a baseline level, measured at the start of the activity, and another value measured after the activity has had time to generate results. Indicators measure change in units that are meaningful and comparable across past and future measurements.
Finally, indicators focus on only one aspect of your activity and are narrowly defined. For a given activity, a complete set of indicators should include at least one indicator per significant element. You should not rely on input or output indicators alone to provide evidence of results.
We have referred to indicators in terms of units and values, but keep in mind that both quantitative and qualitative indicators bring value to M&E processes – and both can work within a results framework approach. There are many ways to develop qualitative indicators that capture this important information while still yielding comparable and measurable data: we can use scales, rubrics, coding, and other methods. In some cases, you might also choose to incorporate qualitative methods – like case studies, interviews, or focus groups – into your M&E approach.
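As a small illustration of the rubric and coding idea (the rubric levels below are invented for the example, not a standard scale), an ordinal rubric can turn observed practice into values that can be compared over time:

```python
# Sketch: code qualitative observations of practice with an ordinal rubric
# so they become comparable values. Rubric levels are invented for illustration.

ADHERENCE_RUBRIC = {
    "not observed": 0,
    "partial adherence": 1,
    "consistent adherence": 2,
    "consistent adherence, mentors others": 3,
}

observations = ["partial adherence", "consistent adherence", "partial adherence"]
scores = sorted(ADHERENCE_RUBRIC[obs] for obs in observations)
print(f"Median rubric score: {scores[len(scores) // 2]}")
```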
It is important to be aware that indicators are not:
• Anything you can think of to measure: Any activity will provide a multiplicity of opportunities for measuring things, but not every measure is an indicator. Indicators should be important to assessing the results of your activity.
• Objectives or targets: Indicators measure actual results; they do not state desired ones. For example, “80% of participants pass Maria’s GCP course” is a target, not an indicator.
• Biased: Indicators should not embed a direction of change, so avoid words like “improved,” “increased,” or “gained.”
Why are indicators important?
Indicators enable you to reduce a large amount of data to its simplest form (number of participants who completed the training; level of adherence to good practices in task performance; number of research activities successfully completed).
When related to targets or goals, indicators can signal the need for corrective action, evaluate the effectiveness of various management actions, and provide evidence as to whether objectives are being achieved.
Five steps for selecting indicators
There are five components in the results chain: inputs, activities, outputs, outcomes, and impacts. You need a set of indicators for each component, which you will select using the following five steps:
1. Clarify the results chain
Begin by going back to your results chain. Developing clear goals and objectives is necessary to have a clear results statement. From this solid foundation, you can begin to understand what information you will want to use to develop appropriate indicators.
2. Develop a list of possible indicators
Do not limit yourself at this point – this is a brainstorming exercise meant to generate as many measurement ideas as possible. Think of internal sources of information and measurement, and look elsewhere – to colleagues, experts, publications, and published evaluations – for examples of how other activities have been measured.
3. Assess each indicator by asking yourself: Is it…
• Measurable? (Can it be quantified and measured on a scale?)
• Practical? (Can data be collected in a timely way and at a reasonable cost?)
• Reliable? (Can it be measured repeatedly and precisely by different people?)
• Relevant? (Is it attributable to your activity?)
• Useful? (Do partners and stakeholders think that the information provided by the measure is important for decision making?)
• Direct? (Does it closely track to the result it is intended to measure?)
• Sensitive? (Does it serve as an early warning of changing conditions?)
• Capable of being disaggregated? (Can data be broken down by gender, age, location, or other dimensions, where appropriate?)
4. Select the “best” indicators
Now that you have some criteria for selecting indicators, you will balance information needs with cost. There is a cost associated with each piece of data you collect. If you find you have a lengthy list of indicators for one result, limit it to the one, two, or at most three indicators per result that meet your needs at a reasonable cost.
5. Define expected results
Once you have selected your indicators, establish expected or desired results for each one. These results should be realistic and reflect your intention for the performance of the activity. These expected results will help benchmark your M&E process and will give you a point of comparison for your final evaluation.
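A minimal sketch of how expected results can serve as benchmarks during monitoring follows; the indicator names and values are invented assumptions:

```python
# Sketch: compare actual indicator values against expected results and flag
# shortfalls. Indicator names and values are invented for illustration.

expected = {"staff_passing_final_test": 40, "workshop_sessions_delivered": 6}
actual = {"staff_passing_final_test": 35, "workshop_sessions_delivered": 6}

for name, target in expected.items():
    status = "on track" if actual[name] >= target else "needs corrective action"
    print(f"{name}: expected {target}, actual {actual[name]} -> {status}")
```

This is the comparison that, as noted above, signals the need for corrective action during implementation and anchors your final evaluation.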
Proxy indicators
A proxy indicator is an indirect measure used to obtain data that is indicative of the desired result (for example, using the number of recorded protocol deviations as an indirect measure of GCP adherence). For any proxy measure, it is important to make explicit what is actually being measured and its possible limitations, and then account for those limitations as data move through your M&E system: from collection, to aggregation, to use.
Identifying data sources
Having a list of indicators is a crucial step in the M&E framework process, but you will need data to measure them. Where does this data come from? You must consider this before you finalise your M&E plan. After all, having carefully crafted indicators and no way of measuring them is of little use. Knowing where your data will come from, and that you can reasonably, affordably, legally, and ethically obtain this information is essential.
For your input and output indicators, you likely will not have to look far to find data. In fact, your activity reports may contain the numbers you are looking for, such as number of people trained or how much an activity cost to run. For outcomes and impacts, you will need to look further (perhaps both inside and outside your institution) to obtain information.
Let us return briefly to the case study.
When it comes to defining indicators, Maria and her colleagues are careful to reduce their long list of potential measurements to just a few. They do not have many resources – either money or time – to gather and analyse this information. When it comes to inputs, Maria knows she will have to account for the budget provided by The Foundation, but outputs and outcomes are more complicated. The team has decided that the number of trained participants will be the output indicator but cannot agree on how to define a “trained” person. Maria argues that it must be tied to a final assessment – a test – but one of her colleagues suggests that it should instead be defined by attendance in the workshop: all day, for three full days.
When it comes to defining a strong indicator for the outcome, Maria is clear that she wants her indicator to be “capacity to implement clinical trials in accordance with Good Clinical Practices.” But can she realistically measure that? Maria knows she will need a baseline, and it will be challenging to work out the right way to collect that data – and to collect it for 50 participants – in the next few weeks. Nevertheless, she is determined to figure it out.
Session 5: Data Quality: Assessment & Management
M&E rests on access to high-quality data. In this session, we will explore how “quality” is defined in this context and what steps we will take to ensure that we use quality data.
Learning objectives
By the end of this session, you will be able to:
1. List and describe the five threats to quality data; and
2. Identify potential data threats and how to address them.
What is data quality?
Just as when we select and edit down our indicators, achieving data quality is, to some extent, about striking a balance between the highest level of quality and reasonable limits of cost. Ideally, we would eliminate all threats to quality, but this is likely not realistic. Instead, we will systematically identify threats to quality and strategically limit our exposure to these threats.
Criteria for data quality
The five criteria for data quality are validity, reliability, timeliness, precision, and integrity. For each of these five items we need to ask critical questions about our data and be aware of common threats. When we formulate our M&E framework, it will include a data plan that details the measures we will take to limit our exposure to these threats throughout the data management process.
Validity
Validity means that whatever is being measured is what we intend to measure. When we think about validity, we want to go back to when we formulated our indicator and consider what information we really need. Typical threats to validity include poorly defined indicators, use of proxy measures, incorrect inclusion or exclusion of data, and unreliable data sources.
Key questions for validity
• What is the relationship between the activity and what I am measuring?
• What is my data transcription process?
• Where is there potential for error in the process?
• What steps am I taking to limit transcription error (e.g., double keying of data for large surveys, built-in validation checks, random checks)?
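To make the idea of a built-in validation check concrete, here is a minimal sketch in Python of the kind of automated check that could flag transcription errors in training records. The field names, valid ranges, and sample records are assumptions invented for this example, not part of any prescribed tool.

# Illustrative validation check for transcribed training records.
# Field names and valid ranges are assumptions for this example.
def validate_record(record):
    errors = []
    if not record.get("participant_id"):
        errors.append("missing participant_id")
    score = record.get("final_test_score")
    if score is None or not (0 <= score <= 10):
        errors.append("final_test_score missing or outside the 0-10 range")
    if record.get("days_attended", 0) > 3:
        errors.append("days_attended exceeds the 3-day workshop length")
    return errors

records = [
    {"participant_id": "P001", "final_test_score": 9, "days_attended": 3},
    {"participant_id": "", "final_test_score": 12, "days_attended": 3},
]
for record in records:
    for error in validate_record(record):
        print(record.get("participant_id") or "<unknown>", "-", error)

Running a check like this on every batch of transcribed data catches many errors long before analysis, at very low cost.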
Reliability
Reliability is related to consistency. When we think about reliability, we want to check that we are measuring the same thing, in the same way each time, over time. Typical threats to reliability include human error, contextual factors, collection methodologies, collection instruments, personnel issues, and analysis and data manipulation methodologies.
Key questions for reliability
• Is the same instrument used from year to year, site to site?
• Is the same data collection process used from year to year, site to site?
• Are there procedures in place to ensure that data are free of significant error and that bias is not introduced (e.g., instructions, training, etc.)?
Timeliness
Timeliness is important because we want data to be collected and analysed during a timeframe when it is still useful and relevant to our M&E activities. Typical threats to timeliness include collection frequency, reporting frequencies, and time dependency of data.
Key questions for timeliness
• Are data available on a frequent enough basis?
• Is a regularised schedule of data collection in place?
• Are data from within the reporting period of interest?
• Are the data reported as soon as possible after collection?
Precision
Precision is about the possibility of error or bias in the data and about determining the margin of error that is acceptable, which will depend on the change expected from the activity. Typical threats to precision include source error or bias, instrumentation error, transcription error, and manipulation error.
Key questions for precision
• Is the margin of error smaller than the expected change being measured?
• Are the margins of error acceptable for program decision making?
• Have issues around precision been reported?
• Would an increase in the degree of accuracy be more costly than the increased value of the information?
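As a rough illustration of the first question, the sketch below computes an approximate 95% margin of error for a mean test score from a small sample and compares it with the change the activity is expected to produce. All numbers are invented for the example, and the normal approximation is a simplification.

# Approximate 95% margin of error for a sample mean (invented numbers).
import statistics

baseline_scores = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]  # hypothetical baseline test scores
expected_change = 2.0                              # e.g., we expect scores to rise by 2 points

n = len(baseline_scores)
std_dev = statistics.stdev(baseline_scores)
margin_of_error = 1.96 * std_dev / n ** 0.5        # normal approximation

print(f"Margin of error: {margin_of_error:.2f}")
if margin_of_error < expected_change:
    print("Precision is adequate: the margin of error is smaller than the expected change.")
else:
    print("Precision is inadequate: the expected change could be lost in measurement error.")

If the margin of error rivals or exceeds the expected change, you cannot tell programme effect from noise, so you would need a larger sample or a more precise instrument.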
Integrity
Integrity is about the truthfulness of your data. Misrepresented or untrue data can be introduced to your M&E process either intentionally or unintentionally, by people or by technology. Typical threats to integrity include temptation, time, technology, corruption, personal manipulations, technological failures, and a lack of audit verification and validation.
Key questions for integrity
• Did the data collector have an incentive to misrepresent the data (e.g., pressure to meet targets)?
• Is the technology needed to collect and process data always available?
• Are data files, either on paper or electronic, safely stored?
• Are there protocols adopted and measures implemented to ensure data quality and control?
In the last session, we mentioned a debate between Maria and a colleague over the definition of a “trained” person. The two continued to treat the question as a matter of opinion until they sat down to create a data quality plan. It then became clear that they would have to settle on a clear definition of a “trained” person, at least within the context of Maria’s M&E framework. Why? A missing shared definition is a major threat to the validity of Maria’s data. In the end, Maria made a compelling argument, and a trained person was defined as one who has passed the final test with a score of at least 8/10.
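With the definition settled, the output indicator becomes straightforward to compute. The sketch below applies the agreed rule (pass mark of 8 out of 10) to a hypothetical list of test results; the participant IDs and scores are invented for illustration.

# Count "trained" participants under the agreed definition: final test score >= 8/10.
PASS_MARK = 8

final_scores = {"P001": 9, "P002": 7, "P003": 8, "P004": 10}  # hypothetical results

trained = [pid for pid, score in final_scores.items() if score >= PASS_MARK]
print(f"Trained participants (output indicator): {len(trained)} of {len(final_scores)}")

The value of the shared definition is that anyone repeating this count, this year or next, will arrive at the same number from the same data.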
Legal and ethical considerations
In addition to understanding the five criteria for data quality, and mitigating the threats related to them, there are many legal and ethical considerations when it comes to collecting, handling, and reporting data. In training and education, we frequently deal with personally identifiable information. In many countries, strict laws govern how such data can be handled, shared, or used. It is important to be fully aware of your obligations in this regard so that you can use data well, legally, and ethically.
At the same time, there is a growing consensus that we should share the results of evaluations of all kinds of projects, activities, and programmes widely and, where possible, publicly. Doing so makes us transparent and accountable for our work, and it adds to our shared knowledge. Knowing how to share and even publish the results of your M&E processes while protecting data privacy is a priority. Laws and regulations on this issue vary from country to country, so be sure that you are informed about the laws that apply to you and any changes that may arise.
Creating a data quality plan
You will need to consider how to mitigate threats to data quality for each indicator along the entire “data management process,” which has six main stages. When you implement your M&E framework, you will need to do this for every indicator it uses; for the purposes of this workshop, however, you may be asked to complete a data quality plan for just a few indicators. You can find the data quality section within the M&E Framework Project template.
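One simple way to keep such a plan consistent across indicators is to record, for each indicator and each stage of the process, the threats you foresee and the mitigations you commit to. The sketch below shows one possible structure in Python with a single illustrative entry; the stage names follow the data management process described below, but the indicator and wording are invented, not a prescribed template.

# A minimal data quality plan structure: indicator -> stage -> threats and mitigations.
# The entry shown is illustrative only.
data_quality_plan = {
    "number of trained participants": {
        "Collection": {
            "threats": ["incomplete attendance sheets", "test scores mislabelled"],
            "mitigations": ["train facilitators on the form", "spot-check entries daily"],
        },
        "Collation": {
            "threats": ["transcription errors when compiling results"],
            "mitigations": ["double-key the test scores", "verify a random sample"],
        },
    },
}

for indicator, stages in data_quality_plan.items():
    for stage, plan in stages.items():
        print(indicator, "|", stage, "|", "; ".join(plan["mitigations"]))

Whether you keep the plan in a spreadsheet, a table in the template, or a structure like this, the discipline is the same: every indicator gets an explicit threat and mitigation at every stage.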
Steps in the data management process
Threats to data quality will vary based on your indicators and data sources, but you should be aware of some general areas for vigilance at each stage of the process.
Figure 3. Data management process
Source
Threats: Data cannot be collected or is very costly or time-consuming to collect. Data requires special instruments or tools.
Possible actions: Talk to data providers and collectors in advance to establish what is and what is not possible. Use a sample to test if data can be retrieved from the source.
Collection
Threats: Data entered in the instrument are incomplete, inconsistent, or mislabelled; the instrument cannot be used due to location or resources; training is insufficient and collectors are unable to use the instruments; the collection instrument is changed mid-stream.
Possible actions: Provide detailed training, observe data collection to ensure consistency, provide the materials the context requires, and document the data collection process.
Collation (compilation of data following its collection)
Threats: Transcription errors, lost files or misplaced data, technical problems when compiling data sets.
Possible actions: Establish protocols and/or checklists, randomly sample and verify data (see the sketch below), and make note of any errors you discover and report them.
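A sketch of the "randomly sample and verify" action: draw a handful of collated records at random and list them for manual comparison against the original paper forms. The sample size and record identifiers are arbitrary assumptions for this example.

# Draw a random sample of collated records for manual verification
# against the original forms. Sample size and IDs are arbitrary.
import random

collated_record_ids = [f"P{i:03d}" for i in range(1, 51)]  # e.g., 50 participants
sample_size = 5

random.seed(42)  # fixed seed so the audit sample is reproducible
audit_sample = random.sample(collated_record_ids, sample_size)
print("Verify these records against the source documents:", audit_sample)

Checking even a small random sample each time data are compiled gives an early warning of systematic transcription problems.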
Analysis
Threats: Technological problems, use of inadequate analysis techniques, incorrect assumptions.
Possible actions: Explicitly describe analysis techniques and assumptions, verify tools used for analysis, ask for expert opinion.
Reporting
Threats: Violations of data privacy, using a format or synthesis that is not useful to the audience, selective use of results, creating a narrative that does not match the data, simple errors.
Possible actions: Use an external reviewer to get feedback on reports; create reports for a specific audience (you may need multiple reports); be honest in your reporting and include all your results, not just the ones that look good; lead with the data; and protect data privacy and confidentiality when reporting.
Usage
Threats: Creating a story and fitting the data to it, withholding data from all or specific audiences, not fully understanding the data.
Possible actions: Look at and analyse the data before deciding what they mean, share all relevant data, make sure you understand your own data, and discuss it within your team.
Preparing for Block Three
Congratulations on completing the face-to-face training! You should now have a variety of new (or refreshed) M&E concepts and tools at the ready. You should also have moved your M&E Framework Project forward considerably. Your facilitator(s) will give you complete information on how Block Three will function, but know that over the coming days you will need to refine and polish your framework so that it is ready for submission by the deadline established by your facilitator(s). Before you leave the training, be sure that you know all the relevant deadlines and whom to contact with any questions that arise after you leave.