IMC Section 5
Evaluation Overview
Evaluating a public health intervention that addresses vaccine hesitancy among parents, particularly regarding COVID-19 vaccination for daycare-aged children, is crucial for improving the program and ensuring its success. The goal is to ensure the program is implemented effectively, reaches the intended audience, and achieves the desired outcomes. Practical program evaluation will track resources, program activities, and the impact on vaccination behavior.
Program Monitoring and Evaluation Process
Monitoring and evaluation (M&E) work together to assess intervention effectiveness. Monitoring tracks resources, activities, and adherence to timelines, ensuring accountability and early detection of deviations. Key components include funding, delivered activities, payments, and program timelines. Additionally, assessing fidelity, target audience reach, and perceived issues helps maintain program integrity.
Evaluation determines if the program achieved its goals by examining fidelity, exposure levels, and behavioral changes. Engaging stakeholders in designing evaluation questions ensures relevance and usefulness. Considering who will use the information, what they need to know, and when it is most useful makes evaluation a valuable tool for program improvement. Through systematic M&E, the program remains effective, responsive, and aligned with its objectives.
Key Evaluation Questions
Was fidelity to the intervention plan maintained?
This question asks whether the program was implemented according to the original plan, which is essential for ensuring the intervention is delivered as intended. Maintaining fidelity keeps the program from deviating from its core strategies, which could compromise the results.
Were exposure levels adequate to make a measurable difference?
This question addresses whether the program reached enough individuals (in this case, parents of daycare-aged children) with enough frequency to significantly impact vaccine acceptance. Exposure is critical to determining whether the intervention had the desired effect on attitudes toward vaccination.
Eliminated Evaluation Question
Were there any unintended effects?
While understanding unintended effects is essential in some cases, for this particular intervention, the focus will be on ensuring program fidelity and exposure levels, as these are more directly tied to the program’s success in achieving its intended outcomes.
Evaluation Methods
The data for this evaluation will be collected using both qualitative and quantitative methods to ensure a comprehensive understanding of the intervention’s impact.
Quantitative Method
Survey
A structured survey will collect numerical data on parental attitudes toward the COVID-19 vaccine before and after exposure to the intervention (Gerretsen et al., 2021). Questions will assess knowledge, attitudes, and behavioral intentions regarding vaccination.
Sample Questions (Likert-style)
- I believe that the COVID-19 vaccine is safe for children (1-5 scale: Strongly Disagree to Strongly Agree).
- I feel confident in the government’s efforts to promote childhood vaccination (1-5 scale).
- If the COVID-19 vaccine were available for daycare-aged children, I would allow my child to get it (1-5 scale).
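As a rough illustration of how the pre/post survey comparison could be scored, the sketch below computes mean Likert ratings before and after the campaign. The response values, sample size, and scoring function are all hypothetical assumptions for illustration, not actual program data.

```python
from statistics import mean

# Hypothetical 1-5 Likert responses to one survey item ("the COVID-19
# vaccine is safe for children"), from the same seven parents before and
# after the campaign. Real data would come from the survey platform export.
pre_scores = [2, 3, 2, 4, 1, 3, 2]
post_scores = [3, 4, 3, 5, 2, 4, 3]

def mean_change(pre, post):
    """Average per-parent shift on the 1-5 scale (positive = improvement)."""
    return mean(b - a for a, b in zip(pre, post))

print(f"Pre-intervention mean:  {mean(pre_scores):.2f}")
print(f"Post-intervention mean: {mean(post_scores):.2f}")
print(f"Mean attitude change:   {mean_change(pre_scores, post_scores):+.2f}")
```

Because the same parents answer both surveys, scoring the per-parent difference (rather than comparing group averages alone) gives a more direct measure of attitude change.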
Qualitative Method
Semi-structured Interviews
A sample of parents will be interviewed in depth to learn how they made decisions about the vaccine. The interviews will also provide insight into the reasons behind vaccine hesitancy and the intervention's effect on their attitudes.
Data Collection Plan
To ensure data collection is consistent and effective, the following steps will be taken:
Pre-Intervention and Post-Intervention Surveys
Before and after the intervention, surveys will be distributed to a sample of parents to measure changes in attitudes and vaccine intention (Lazarus et al., 2022). Attitudes will be compared before and after exposure to the campaign.
Interviews with Parents
Qualitative data will be obtained from a set of interviews with parents who participated in the program (Sallam, 2021). The interviews will focus on how the intervention changed their perspectives on vaccination and whether it addressed their concerns.
Program Monitoring
The program’s implementation will be tracked routinely to ensure activities take place as planned, including monitoring materials distributed, events held, and outreach conducted. Attendance records and feedback forms will be used to collect data on participation and engagement.
Outcome Measures
The evaluation will track the following outcomes based on the SMART objectives of the program:
Objective 1 (Knowledge Increase)
Measure the increase in parental knowledge about the safety and benefits of the COVID-19 vaccine.
Metric
Pre- and post-intervention survey results on vaccine knowledge.
Objective 2 (Behavioral Intention)
Measure the change in parents’ willingness to vaccinate their children against COVID-19.
Metric
Change in survey responses regarding vaccine acceptance.
Objective 3 (Program Reach)
Track the number of parents reached by the intervention.
Metric
Number of surveys completed and parents engaged in events.
Objective 4 (Satisfaction with Intervention)
Measure parent satisfaction with the program and its materials.
Metric
Post-intervention satisfaction surveys and focus group feedback.
Interpretation and Reporting
Results will be analyzed to identify patterns of change in attitudes and behaviors. Changes in parental intentions regarding vaccination will be compared with the level of exposure to the intervention to assess its effectiveness. A final report will be produced summarizing the findings, with recommendations for improving future interventions.
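A minimal sketch of the exposure comparison described above, grouping parents by self-reported exposure level and averaging their intention change within each group. The exposure categories and change scores are assumed values for illustration, not program results.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records pairing each parent's self-reported campaign
# exposure level with their intention change (post minus pre, 1-5 scale).
# The categories and values are illustrative, not actual program data.
records = [
    ("low", 0), ("low", 1),
    ("medium", 1), ("medium", 2),
    ("high", 2), ("high", 3),
]

def change_by_exposure(records):
    """Mean intention change within each exposure group."""
    groups = defaultdict(list)
    for exposure, change in records:
        groups[exposure].append(change)
    return {level: mean(changes) for level, changes in groups.items()}

for level, avg in change_by_exposure(records).items():
    print(f"{level:>6}: mean intention change {avg:+.2f}")
```

If intention change rises with exposure level, that pattern supports (though does not prove) the campaign's effectiveness; a flat pattern would suggest exposure was inadequate or the messaging did not resonate.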
Updated Timeline with M&E Milestones
Campaign Duration: 12 months
| Months | Activity | Monitoring/Evaluation Tasks | Estimated Cost |
| --- | --- | --- | --- |
| Months 1-2 | Develop content and approach media | Design pre-intervention surveys and develop evaluation tools. | $3,000 (survey development and initial staff costs) |
| Months 3-4 | Paid media and earned media campaigns start | Distribute pre-intervention surveys. Monitor reach of media (TV, radio, Facebook). | $2,500 (survey distribution, tracking reach) |
| Months 5-6 | Social media campaigns and website launch | Track social media engagement. Mid-term survey and feedback collection. | $4,000 (survey analysis, social media tracking tools) |
| Months 7-12 | Track progress, refine strategies, and assess impact | Ongoing program monitoring. Post-intervention surveys, interviews, and data analysis. | $6,000 (final surveys/interviews, data analysis) |
Updated Budget with Monitoring and Evaluation
| Item | Estimated Cost | Notes |
| --- | --- | --- |
| Paid Media (TV, Radio, Facebook Ads) | $30,000 | Budget for media buys, advertisements, content creation, and tracking effectiveness via surveys. |
| Earned Media (Interviews, Stories) | $10,000 | Costs for media outreach, press materials, and evaluating media engagement (via surveys). |
| Social Media Campaigns | $15,000 | Content creation, ad spend, influencer partnerships, and tracking social media engagement. |
| Website Development & Hosting | $5,000 | Design, hosting fees, and website maintenance. Incorporate tracking mechanisms for website analytics. |
| Email Campaign | $2,000 | Email platform subscription and design costs. Evaluate effectiveness through email open/click rates. |
| Event Costs (Workshops, Meetings) | $3,000 | Venue, catering, and materials for workshops. Include attendance tracking and participant feedback. |
| Outreach Materials (Flyers, Brochures) | $3,000 | Printing and distribution. Monitor distribution through event feedback and outreach surveys. |
| Survey Tools & Software | $1,500 | For pre- and post-intervention surveys (e.g., SurveyMonkey/Qualtrics subscription). |
| Participant Incentives | $1,500 | Incentives for participants completing surveys or attending workshops. |
| Personnel Costs (for monitoring and evaluation) | $10,000 | Includes staff for data collection, tracking engagement, analyzing survey results, and reporting. |
| Data Analysis Software | $1,000 | Tools like SPSS or other software for analyzing survey data. |
| Miscellaneous (Printing, Transportation, etc.) | $2,000 | Miscellaneous costs related to the project (e.g., transportation for interviews, printing materials). |
Total Campaign Budget: $100,000
References
Gerretsen, P., Kim, J., Caravaggio, F., Quilty, L., Sanches, M., Wells, S., Brown, E. E., Agic, B., Pollock, B. G., & Graff-Guerrero, A. (2021). Individual determinants of COVID-19 vaccine hesitancy. PLOS ONE, 16(11), e0258462. https://doi.org/10.1371/journal.pone.0258462
Lazarus, J. V., Wyka, K., White, T. M., Picchio, C. A., Rabin, K., Ratzan, S. C., Parsons Leigh, J., Hu, J., & El-Mohandes, A. (2022). Revisiting COVID-19 vaccine hesitancy around the world using data from 23 countries in 2021. Nature Communications, 13(1). https://doi.org/10.1038/s41467-022-31441-x
Sallam, M. (2021). COVID-19 vaccine hesitancy worldwide: A concise systematic review of vaccine acceptance rates. Vaccines, 9(2), 160. https://doi.org/10.3390/vaccines9020160
Question
Week 5/Section 5: Evaluation- Due on Sun. Feb. 23rd at 11:59 p.m. (*Should be a SHORT section–2 pages MAX!)
Evaluation
BACKGROUND INFO: What it is: Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate.
Evaluation activities should be:
- useful (i.e., responsive to stakeholder information needs)
- feasible given time, resources, and available expertise
- accurate enough to inform the kinds of decisions to be made
- proper/ethical
How it is Done:
- Identify program elements to monitor.
- Monitoring and evaluation are mutually supportive ways of asking if your program is working. Program monitoring is essential for management and accountability. It is an ongoing process that tracks:
- the resources invested in the program
- the number and quality of activities the program offers
- adherence to timelines and budgets
- Monitoring is often called process evaluation. You will always need to track process variables such as:
- funding received
- products and services delivered
- payments made
- other resources contributed to and expended by the program
- program activities
- adherence to timelines
- You will also want to know:
- whether the program is being implemented as planned (fidelity)
- how well the program is reaching your target audience (reach)
- whether staff and representative participants see problems
- To decide which components of the program to monitor, ask yourself who will use the information and how, what resources are available, and whether the data can be collected in a technically sound and ethical manner.
- Engage stakeholders in the (hypothetical) planning process. Trim your list of potential questions by asking who will use the information and what they care most about. Stakeholders want various kinds of input into evaluation plans, depending on their levels of investment in the program and their interest and experience in program evaluation. Find out from stakeholders:
- what they want to know
- how they will use the information
- when the data must be available in order to be useful
- LIST TO REFERENCE: Select TWO key evaluation questions.
- Basic evaluation questions MAY include (you can propose another/new one if it is more applicable):
- Was fidelity to the intervention plan maintained?
- Were exposure levels adequate to make a measurable difference?
- Were behavioral determinants affected by (or associated with) intervention exposures as predicted?
- Did the determinants, in turn, affect behavior as predicted (i.e., was the internal logic of the intervention valid)?
- Can any other event or influence explain the observed effects attributed to the intervention? (confounding variables)
- Were there any unintended effects?
- Based on the research that you have done on your key stakeholders so far, go into detail on 2 of your evaluation questions (from the above list or more appropriate ones that you propose on your own) and how they’ll be utilized. Likewise, recommend eliminating one of the questions proposed above that is likely unnecessary/unhelpful, based on this rationale–one that you won’t use/need.
- Determine how the information will be (hypothetically) gathered. *Note you will make recommendations here – not actually collect data!
- Try to find measures that can detect deviations from program plans quickly. Whether you are tracking the number of brochures distributed, the due dates of bills to pay, or the number of program participants who report being satisfied, monitoring data collection should be a routine function. It should be built into daily record-keeping and integrated into program management.
- Each outcome included in the evaluation (SMART Objectives) needs at least one strong measure to indicate whether the program is being successful in that regard. It’s wise to have multiple measures of major outcomes. List these out, and detail them as much as possible. **PULL THESE METRICS FROM YOUR SMART OBJECTIVES. 4 SMART OBJECTIVES (1 per comm. intervention) = 4 metrics!
- Multiple measures of key outcomes provide cross-validation when their findings agree.
- If you are seeking permanent behavior change (e.g., smoking cessation), define end-points for your intervention and final outcome evaluation that are far enough out to assess permanent change as your field defines it.
- Sources of evaluation data may include all the sources listed earlier for monitoring and also:
- extensive participant interviews or surveys
- archival documents
- direct observations
- Multiple sources will provide different perspectives about the program and thus enhance the credibility of your evaluation. Mixing internal and external perspectives provides a more comprehensive view of the program.
- Choose the data collection method best suited to answering each evaluation question. Bear in mind that good data collection plans often integrate qualitative methods (those that produce descriptive information) with quantitative methods (those that generate numerical data such as frequencies, percentages or rates).
- WHAT THIS MEANS: Qualitative methods add depth, detail and meaning to your research. However, quantitative evidence is usually needed to show that a program increased or decreased the frequency of some health behavior.
- Commonly used qualitative methods include:
- participant observation
- unstructured and semi-structured interviews
- focus groups
- document theme coding
- Quantitative data provide useful background information to help interpret qualitative data. The integration of qualitative and quantitative information can increase the chances that the evidence base will be balanced, helping to meet the needs and expectations of diverse users.
- Examples of quantitative methods are: surveys (via telephone, internet, laptop computer, face-to-face, etc.), numeric coding of clinic records and other documents, structured observations
- The monitoring and evaluation questions and methods that will be used and where they came from. (For the purposes of this project, you need to recommend at least one quantitative method and one qualitative method! If you are a graduate student, your qualitative interview can also be addressed here–but it is “formative research” – research done on the front end to inform the plan’s direction and is not considered summative or evaluative…so you need to recommend something else that is qualitative, too, which will be designed to measure campaign IMPACT. This could be hypothetically interviewing the same stakeholder again after exposure to your social marketing plan, to see if it worked.)
- If you are recommending a survey for your quantitative component, please draft 5 questions in Likert-style format (1-5 strongly agree-strongly disagree scale)
- How results will be interpreted, and when and how they will be available.
Go back and include aspects of evaluation and monitoring in the timetable and budget developed last week to account for various key milestones in monitoring/evaluation and associated (estimated) costs. Submit the new, updated budget with your weekly assignment.

