

From Baseline Data to Insights: Crafting Evaluation Plans for Health Programs
Community Health Management Plan Design
Tami Moser, PhD, DBH
Launched: Oct 25, 2024
tami.moser@swosu.edu | Season: 2025 | Episode: 22
Crafting a comprehensive evaluation plan is essential for ensuring that the impact of a health program can be accurately assessed. This involves clearly defining objectives, selecting appropriate indicators, and determining data collection methods.
Welcome back to another insightful episode of CHM Micro-Credential! I'm your host, Dr. Tami Moser, and today we're diving deep into comprehensive evaluation planning in program development. In this episode, we'll explore the essential components of effective evaluation, from collecting baseline data and comparing program impacts to uncovering unintended outcomes and leveraging statistical tools like JASP. We'll discuss the importance of using both quantitative and qualitative metrics, ensuring data privacy, and gathering ongoing feedback for continuous improvement. Join us as we unpack how thorough evaluation not only demonstrates value to stakeholders and funders but also fosters long-term program success and adaptability. Stay tuned as we share practical examples and strategies to elevate your community health initiatives!
Tami Moser [00:00:00]:
Welcome to the Community Health Management Design Podcast. I'm your host, Dr. Tami Moser. And today, we're diving into a crucial aspect of program development: comprehensive evaluation planning. As we near the end of our program design journey, it's time to ensure that our hard work pays off. Effective evaluation isn't just about measuring success. It's about creating a road map for continuous improvement and demonstrating the value of your program to stakeholders and funders. Let's start by breaking down the three key components of comprehensive evaluation planning. First, selecting appropriate evaluation metrics.
Tami Moser [00:00:34]:
You've already developed a set of outcome metrics and measurement-focused metrics, and some of those will absolutely play into evaluation, but we wanna be a little more specific and focused in this area. We may be selecting additional evaluation metrics that are appropriate for measuring the success of our program. Second, creating data collection plans. And third, analyzing program impacts. We've already addressed the first two of these to different degrees throughout your design journey. Now we're really gonna hone in on them and add analyzing for program impact. First up, let's talk about selecting appropriate evaluation metrics. The key here is to choose metrics that directly align with your program objectives.
Tami Moser [00:01:26]:
For example, if your program aims to reduce diabetes rates in your community, your metrics might include the percentage of participants with improved A1c levels, the number of diabetes-related emergency room visits, and participants' self-reported quality-of-life scores. And so there is a point where you will start collecting some separate data, or dipping into data you've already planned on collecting and using it for a slightly different purpose. Remember, a mix of quantitative and qualitative metrics often provides the most comprehensive picture of your program's impact. Now let's talk about creating data collection plans. The best metrics in the world won't help you if you can't find and collect accurate data. And we've talked about this to a degree as well. Right? You need to know that you can get your hands on the data and find the numbers you need. But this is also about collecting accurate data: making sure that whatever methods you've set forth, or are tapping into because they already exist, the data that's in there is actually accurate.
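To make that mix of quantitative and qualitative metrics concrete, here is a minimal sketch in Python of how you might register evaluation metrics alongside the objectives they support. The metric names, data sources, and schedules are hypothetical illustrations, not prescriptions from the episode.

```python
from dataclasses import dataclass

@dataclass
class EvaluationMetric:
    """One evaluation metric, tied back to a program objective."""
    name: str
    objective: str    # which program objective this metric supports
    kind: str         # "quantitative" or "qualitative"
    data_source: str  # where the numbers will come from
    schedule: str     # how often the metric is collected

# Hypothetical metrics for a diabetes-focused program (illustrative only).
metrics = [
    EvaluationMetric("pct_improved_a1c", "reduce diabetes rates",
                     "quantitative", "EHR lab results", "quarterly"),
    EvaluationMetric("diabetes_er_visits", "reduce diabetes rates",
                     "quantitative", "hospital admissions data", "monthly"),
    EvaluationMetric("quality_of_life_score", "improve well-being",
                     "qualitative", "participant survey", "pre/mid/post"),
]

for m in metrics:
    print(f"{m.name}: {m.kind}, from {m.data_source}, {m.schedule}")
```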
Tami Moser [00:02:38]:
Consider these tips when you're reviewing this. First, use a mix of data collection methods: surveys, clinical data, interviews, and observational studies. When you mix these together, it helps give you a better picture. Second, establish a clear timeline for data collection: when does it start and when does it end? And that might be multiple starts and multiple ends. If you've got a design where you're gonna be bringing in cohorts, so people move through the program together and then move out, then you have a much clearer established timeline for data collection for a specific cohort. If people can join at any time and move out of the program at any time, that may mean you need to design your timeline a little bit differently.
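If you are running cohorts, one way to keep that timeline explicit is to derive each cohort's collection dates from its start date. A minimal sketch, assuming a hypothetical 12-week program and the "pull baseline a week before go-live" rule discussed later in the episode:

```python
from datetime import date, timedelta

def collection_dates(cohort_start: date, program_weeks: int = 12) -> dict:
    """Baseline, midpoint, and end collection dates for one cohort."""
    return {
        "baseline": cohort_start - timedelta(days=7),  # a week before go-live
        "midpoint": cohort_start + timedelta(weeks=program_weeks // 2),
        "end":      cohort_start + timedelta(weeks=program_weeks),
    }

# Two hypothetical cohorts starting in January and April.
for start in [date(2025, 1, 6), date(2025, 4, 7)]:
    print(start, collection_dates(start))
```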
Tami Moser [00:03:27]:
Third, ensure data privacy and security in your collection methods. This is important, but because this is health care and HIPAA is always such an important focus, this is an area you probably feel pretty secure in within your organization. If you're using your organization's software packages and data is collected as it always has been, maybe in an EHR, then you can be very comfortable with the data privacy and security, because your organization has already vetted those systems and holds to a certain level of privacy and security. If you're accredited by The Joint Commission, then you know that your data privacy and security should be up to the task. Right? But that being said, if your organization is gonna be adopting new software for your particular program, then you need to make sure that privacy and security can be met to the degree you need. Maybe you don't need to be HIPAA compliant because you're not going to be housing actual health data in the system you design. Maybe it's gonna house something else, but you're going to get information out of it that's really important. Maybe it's just the number of times the patients log in or out.
Tami Moser [00:04:57]:
Maybe it's a self-help portal. Their actual health care data isn't in there, but they can get educational material and use it for multiple reasons. In that instance, you don't necessarily need to be HIPAA compliant, but at the same time, you wanna make sure data security and privacy are in place and know to what degree you can actually protect it. And so you're gonna have to do some extra legwork. If you're using some of the marketing programs that we've suggested, for instance, the one I use can be HIPAA compliant. We can do emails out with it. There are different things you can do. Certain things you can turn on mean it can no longer maintain HIPAA-level compliance, but it's still protected.
Tami Moser [00:05:44]:
You just have to make some decisions about what is necessary or possible here for you, and about what data privacy and security mean to you, your program, and your data collection methodology, if you will. And then the last tip is to train your staff in proper data collection techniques. Whatever design you have in place, however you're going to be tackling this, you wanna make sure staff know where to put data and when data needs to be captured. It's really about looking back at everything you've built to this point and going, okay, how does everybody get trained on this? And I often look at this as an opportunity to train people not only on the how, but the why. Why do we want this data? How does it connect back to how we're judging the effectiveness of our program? How do our choices have an impact on our ongoing funding, or on the evaluation of the overall program, which could impact further funding once this is over? And so do not assume that because the people running your program already work in your facility, already have access to the software you're gonna be using, and have been trained in that software,
Tami Moser [00:07:11]:
Don't just assume that they know how to do it appropriately. You really should put at least some baseline training in place to make sure that they can and are doing it as they should. And this is also a good opportunity to make sure it's standardized. You don't wanna forget about baseline data, either. You set out the metrics, and in our past episodes, as we went through those and you started thinking about them, I really honed in on the fact that you need to do some research to make sure you can get those numbers and know how you're going to get them. Once you've verified all of that, now, at this stage in evaluation, it's time to go get baseline data for all of these metrics. Everything you're gonna use for measurement, whether it's measurement on specific metrics as you're moving forward (process, balancing, and outcome measures), you also wanna look at the evaluation of your overall program and build a plan that makes sure you're looking at all the aspects related to success.
Tami Moser [00:08:18]:
Now you need to go back in and go, let's get our baseline measures. What does this all look like now? That's where you really start to measure how far you've come. You have to have the baseline, and this is something people often forget. They're like, well, I can just go back and get a baseline once I've collected all the data and I'm running everything. That's an assumption, and assumptions can be very dangerous. My pro tip is to go ahead and get your baseline numbers. You go through every metric you wanna use to evaluate the success of your program, and you find, okay, as of today, what does that look like before we start? Now you may go, well, I don't want the baseline numbers more than a week ahead of when we're actually gonna go live with the program.
Tami Moser [00:09:06]:
Because maybe those numbers would shift on their own, and either get better or worse, in that week's time. So you don't want numbers from three months before you start. Right? Right now, as you're working on the design of the program, this is the stage where, if you haven't already, you need to say, okay, here are all the metrics I've outlined. Do I need anything else to help me understand whether our program is successful or not? And if I do, what is that? This is the time to design those additional metrics. And then: when are we gonna plan on starting the program, and when would I want a baseline measurement for all of these relative to an actual start date? And I would say, don't go, well, we're gonna start it November 1st, or December 1st, or January 1st, so we just need the numbers on December 27th.
Tami Moser [00:10:06]:
Instead, go, we would need to pull the baseline numbers a week before we start the program. Then if your start date moves some, because it often does when you're planning things like this (things come up, and you need to figure out additional items before you actually start), that "week before" tells you exactly what you need to do. Or maybe it's three days before go-live that we want those baseline numbers, because then we're sure we're going live. Right? But right now, what I would recommend is that you go in and get the numbers so you know you can actually pull them, whatever that looks like for the different metrics, even though you're not really capturing a baseline yet. What it might give you, though, is a better understanding of what everything looks like now, which can help create a stronger case for funding the program. You might actually go, I need these measurements now because I'm gonna feed them into my discussion of the overall problem.
Tami Moser [00:11:13]:
And now you can go back to earlier components you finished and update those numbers, or put them in, to make sure that the need for this program is really supported by numbers actually tied to those who would be targeted for the program. Keep that in mind. This is an important part: if you can't get your numbers and you can't measure, you don't know whether you've done the job you set out to do or not. And finally, let's discuss analyzing program impact. This is where you turn your raw data into actionable insights, and this is really the turning point for evaluation planning, versus pulling metrics and watching them as we go to start seeing improvements in individual patients. Right? There's a shift here from the individual patient, who is very important, to the actual program's application. When you're designing these programs, you've got kind of two tracks you have to keep in mind at the same time.
Tami Moser [00:12:18]:
As a practitioner, a provider of health care, you do have to shift your focus, because when you're a provider, you're dealing with the patient. Right? You wanna help the patient. You want this patient's numbers to shift. You want their A1c to get better. As a practitioner and a provider, that's what you're focused on. And when you're looking at charts, you're looking at the patient's chart not to get a clearer picture of the population and the community; you're looking at it to help the individual patient. That's what it should be.
Tami Moser [00:12:53]:
Right? But when we're moving to community health program design, what you're doing now is stepping back from the individual patient and going, I'm looking at the aggregate, the community as a whole, based on the criteria I put in place for my community, however we're defining that. Right? I have a definition for what my community entails. That goes back to the Healthy Kids example, kids ages 7 through 12. I have a feeling I got those numbers a little messed up because I don't have it in front of me, but that definition helps me identify the actual community I'm working with. And so what I'm designing for is impact in the community as a whole. I'm not looking at the individual patient. When we take the program live, I'm pulling raw data on each of the individual patients, or participants, that are a part of our program.
Tami Moser [00:14:02]:
But in evaluating the numbers, I'm not looking at whether a patient got better. I'm looking at how our community improved. And that's where you really start to turn your raw data into actionable insights. Here are some key strategies for this. First, use statistical analysis to identify significant trends and correlations. If you don't have software for this, there is a program called JASP (J-A-S-P). It's free statistical analysis software.
Tami Moser [00:14:36]:
It is a match to SPSS, if you're familiar with that; you just don't have to pay to use it. You can look up some tutorial videos, or you may have connections with someone who's a statistician or is very familiar with quantitative research, and they can run the numbers to tell you whether the changes were statistically significant. Right? That's the first strategy: use statistical analysis to identify significant trends and correlations. Second, compare your results to your baseline data and any control group. Some of you might design your program to actually have a control group; it's being designed more as research.
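For those who prefer scripting to JASP's point-and-click interface, here is a minimal sketch of the same kind of significance test in Python with SciPy. The A1c values are made up, and a paired t-test is only one reasonable choice; the right test depends on your design and your data.

```python
from scipy import stats

# Made-up paired A1c values for the same eight participants.
baseline_a1c = [7.9, 8.4, 7.6, 8.1, 9.0, 7.8, 8.6, 8.2]
followup_a1c = [7.4, 8.0, 7.5, 7.6, 8.3, 7.7, 8.1, 7.8]

# Paired t-test: did A1c change significantly within participants?
t_stat, p_value = stats.ttest_rel(baseline_a1c, followup_a1c)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Change in A1c is statistically significant at the 0.05 level.")
```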
Tami Moser [00:15:18]:
You're using a research methodology, if you will, with that control group, so you can look at the baseline data, the control group data, and then the statistical analysis of your program's data. In that comparison, you can get some really good information. But you may not have any control group, and that's absolutely fine. You have those who aren't participating: you've offered the program to them, but they're choosing not to participate. And you have the baseline data you're using. Right? That baseline data gives you a picture of everybody that fits the community's profile and would be offered access, and then you're gonna have your raw data about the changes on the other side, from those who are actually participating. You wanna look for both intended and unintended outcomes. Now, we've talked about balancing measures.
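And if you do have a comparison group, whether a formal control group or simply eligible non-participants, the analysis shifts to comparing changes between independent groups. A rough sketch, again with made-up numbers:

```python
from scipy import stats

# Made-up change-from-baseline values (negative = improvement).
participant_change    = [-0.5, -0.4, -0.1, -0.5, -0.7, -0.1, -0.5, -0.4]
nonparticipant_change = [-0.1,  0.0, -0.2,  0.1, -0.1,  0.2,  0.0, -0.1]

# Welch's t-test: compare mean change between independent groups.
t_stat, p_value = stats.ttest_ind(participant_change, nonparticipant_change,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```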
Tami Moser [00:16:10]:
The balancing measures look at specific things that you don't want to see shift, or shift negatively, as you're moving forward with your program. Right? You've identified things that could be impacted negatively, but you don't intend for that to happen, and so we have balancing measures to help us with that. However, those are the things you already know you don't want impacted by your program; you don't want an unintended consequence associated with them, so you're paying attention to them. But there are always other things you can't even think of. You didn't even imagine that what you're doing could have this unintended outcome. You can also have positive unintended outcomes.
Tami Moser [00:16:57]:
We didn't expect to get this, but look at it. As an example, with the Healthy Kids program, because it's a family-oriented education program, we're focused on the community of those children and on their health being impacted positively. A positive unintended outcome of our program could be that parents are losing weight, improving their numbers, are healthier, and are more engaged with their kids. That could be a very positive unintended outcome of the program. Not its purpose, but we like it. Yeah. We don't want that to go away. And then there are the unintended outcomes that are negative, which we have to figure out how to mitigate.
Tami Moser [00:17:44]:
And then consider the long-term implications of the findings that you have. When you look at the correlations; at the comparative data between baseline, control group (if you have one), and your participants; at the intended outcomes as measured by your evaluation metrics and the outcome, balancing, and process measures you've chosen to use; and at the unintended outcomes, positive or negative, that you need to address, you start to ask, okay, what does this mean for us when we run the program again? What does this mean for our program and our organization a year from now, when we run four groups through? Remember, the goal isn't just to prove your program works, but to understand how and why it works, or doesn't work, so you can continuously improve it. That's part of what actionable insight is supposed to do for you: create an opportunity to make sound decisions about moving forward, what that's gonna look like, and where you can improve. You may find that there were no negative unintended consequences, none of the balancing measures showed anything off (you were able to maintain them, with slight shifts, where you wanted them to be), and you don't see any other unintended negative outcomes.
Tami Moser [00:19:08]:
There is always room for improvement. Right? Always room for improvement. So how are you going to continually improve moving forward? And I'll use even this micro-credential as an example. One of the things that you'll have, and hopefully remember from the very first video I sent out, is that I want feedback. If you see something that you don't think is accurate, I need to know about it. If there's an area where you're like, I really don't understand X even with this, I need something else, I wanna know about it. If you would recommend additions to help people better understand, or a resource you found that I didn't send you to that you think would be useful for others coming through going forward, I wanna know about it.
Tami Moser [00:19:56]:
Why do I wanna know? Because I can improve the program that way. If you're pointing out something you think is wrong, that's fine. It doesn't hurt my feelings. We can all make mistakes, and I could definitely have made a mistake somewhere. Or perhaps you go, I wanna let her know that in our world, whatever our world means to you, the example she talked about wouldn't work that way, and maybe she needs to acknowledge that there's a caveat to what she's saying for people who work in X type of organization. That's useful to me.
Tami Moser [00:20:34]:
I don't get my feelings hurt about something like that, and that's kind of the point I wanna make. You should never get your feelings hurt if there are things you need to improve and, in some form or fashion, you've identified that: someone's brought it to you, your data's telling you that, or there was input from the providers of the program. And so this is another thing that I may mention in another one of the podcasts for this week, but I'm gonna throw it in here too. Everything we've talked about measuring in terms of outcomes has been oriented toward those that are going through your program.
Tami Moser [00:21:09]:
Another group you really should capture data from, about their experience, is whoever's providing the program. I would say you want to do some measures about the experience your providers or practitioners have had in relationship to your program. Did they, and I'm gonna use the word, enjoy the experience? Maybe it would be better to say, did it bring more value for them as a member of your organization? Did they feel more connected to the work because they valued what they were doing? Do they identify any issues with process? I mean, even if your measures look fine. Right? The data from your measures looks fine. You know, like, that looks good. It's not doing anything wonky, if you will. Your practitioners may go, but we don't like the way this is designed.
Tami Moser [00:22:09]:
It's making our jobs harder. Yes, those that are going through the program are getting positive outcomes, and those measures look good, and we're glad that's happening. But we're miserable trying to deliver this. It's stressful. And it could be so much easier if... Right? Your frontline workers are gonna know that.
Tami Moser [00:22:29]:
So I would highly recommend that you also design some collection points, and particular data points, for those who are delivering. That's another thing you wanna look at here, because continual improvement could mean we need to work with those delivering the program to improve their experience of delivering it while maintaining the outcomes we're seeing from the participants going through the program. A little twist on what we've been talking about. Now let's look at a real-world example, the Millbrook Community Health Initiative, which we've been following throughout this micro-credential. This health initiative implemented a comprehensive evaluation plan for their diabetes prevention program. Let's say they selected metrics including A1c levels, body mass index, and participants' nutritional knowledge. For nutritional knowledge, you might do a pre and post, or a pre, mid, and post: before you ever give them a class, midway through the classes, and at the very end of the program. I'll throw that out there.
Tami Moser [00:23:39]:
It doesn't necessarily have to be the way you design it, but that would be an option. They, meaning the program, collected data through regular health checkups, food diaries, and quarterly surveys. And let's say the analysis reveals that while A1c levels improved significantly, body mass index changes were minimal. This led them to enhance their physical activity component. I'm looking at that going, okay, we're seeing what we want to see in the movement of A1c levels. Those have improved to a significant level. We're really happy with that change, but we're not making inroads into body mass index.
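One simple way to surface a finding like "A1c moved but BMI didn't" is to summarize each metric's change against baseline and flag the ones that barely budged. A sketch with hypothetical Millbrook-style aggregates; the 2% threshold is an arbitrary illustration:

```python
# Made-up baseline and current program-level aggregates.
baseline = {"mean_a1c": 8.2, "mean_bmi": 31.5, "nutrition_knowledge": 58.0}
current  = {"mean_a1c": 7.6, "mean_bmi": 31.2, "nutrition_knowledge": 81.0}

for metric, base in baseline.items():
    pct_change = 100 * (current[metric] - base) / base
    flag = "minimal change: review this component" if abs(pct_change) < 2 else "moving"
    print(f"{metric}: {pct_change:+.1f}% ({flag})")
```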
Tami Moser [00:24:20]:
And this is something we've defined as critical for our program's success. So when we look at our program and consider this, the actionable insight is related to the physical activity component. Right? This demonstrates how evaluation can drive program refinement: I can take a portion of my program and shift it to get us outcomes in that area while not seeing a reduction in the positive impact we've seen on A1c levels. That's the thing. Right? You want to keep the positives moving in that direction while improving in the areas that aren't going as well. And this actually gets to, I think, part of the art of evaluation and data-driven decision making. And, of course, evaluation planning comes with its own challenges.
Tami Moser [00:25:23]:
One common misconception is that evaluation is only necessary at the end of a program. In reality, ongoing evaluation is crucial for adaptive management of the program. And remember that: adaptive management of the program. You should not be afraid to adapt when necessary during a program's implementation. If something is not going well, you don't go, oh, well, we'll wait till the end, and then for the next cohort that goes through, it will get better. Instead ask, what could we do now to actually improve the outcomes for the people currently going through it? That's an adaptive management strategy. Right? Ongoing.
Tami Moser [00:26:06]:
Ongoing evaluation is crucial. Another challenge is resource allocation. Evaluation can seem costly, but remember, it's an investment in your program's long-term success and sustainability. If you skip this part and you get to the end and you wanna go, oh, the program was great, we got great outcomes... Prove it to me. Well, you just need to take our word for it. It absolutely got better.
Tami Moser [00:26:27]:
And, you know, here, anecdotally, let me tell you this. That doesn't cut it. Right? Not only can you not prove to me it actually was successful, I'm listening to that going, you didn't do your due diligence to actually manage the program effectively so you could evaluate appropriately throughout. And a balancing measure does you no good, really, if you wait until the very end to look at it and go, oh, that's bad. At that point, you're already losing other patients and customers, because that balancing measure was so important to the ongoing running of your organization, and people are leaving because it's bad while you were just like, I will wait till the end of this to do something about it. Pay attention as you go. That should be part of your evaluation plan.
Tami Moser [00:27:20]:
How often are we gonna look at each of the metrics? They may not all be on the same schedule. In fact, you may scatter some of those so that you can look at one at a time, or a particular set: you know, these three are gonna be looked at every month, these two are gonna be looked at every quarter. They may be on slightly different schedules, and that's fine. It depends on the metric and how you view the need to actually see those results to adapt as you go. So what's your call to action? For this module, I want you to draft an evaluation plan for your community health program. Include at least five specific metrics, outline your data collection methods, and describe how you'll analyze the results. Now I'm gonna add a little extra challenge onto this.
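On the scheduling point above, you could encode each metric's review cadence and compute what comes due in a given month. A small sketch with hypothetical metric names:

```python
from datetime import date

# Hypothetical metrics and how often each is reviewed, in months.
REVIEW_EVERY_MONTHS = {
    "a1c_levels": 1,               # monthly
    "er_visits": 1,                # monthly
    "quality_of_life": 3,          # quarterly
    "staff_experience_survey": 3,  # quarterly
}

def due_this_month(program_start: date, today: date) -> list:
    """Metrics whose scheduled review falls in the current month."""
    elapsed = (today.year - program_start.year) * 12 + (today.month - program_start.month)
    return [m for m, every in REVIEW_EVERY_MONTHS.items()
            if elapsed > 0 and elapsed % every == 0]

# At month 3, both the monthly and the quarterly metrics come due.
print(due_this_month(date(2025, 1, 1), date(2025, 4, 1)))
```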
Tami Moser [00:28:15]:
Try to identify five specific metrics that are not already in your outcome, balancing, and process measure group. The measures we defined earlier will still be a part of your evaluation plan, most definitely. Right? You'll put them on a schedule and follow through with everything we've talked about. But I want you to see if you can find five additional metrics that you think would be important for your understanding of how well your program's operating, in addition to the outcomes you've already defined. Maybe you can only think of three, but your challenge is to see if you can identify five more. And identifying metrics and the things you want to measure can be challenging. Right? There are some, what I will call, low-hanging-fruit metrics where you look at them and go, well, obviously, we're gonna have to judge this.
Tami Moser [00:29:13]:
I mean, obviously, we need the A1c. Sometimes the clinical data that you might wanna capture is one of the easiest things to define, and the other areas that help you monitor overall success are things you haven't defined yet and are more difficult to pin down. That's your challenge: see if you can identify five more metrics. And I want you to take everything you've already defined in the past and bring it over here too, so that you're putting it all together as an evaluation plan. In our next podcast, we'll discuss how to use your evaluation plan as part of a compelling presentation to stakeholders. We'll cover crafting your value proposition and addressing potential challenges head on. Thank you for tuning in to the Community Health Management Design Podcast.
Tami Moser [00:30:06]:
Remember, a well-designed evaluation plan is your key to demonstrating impact, securing ongoing support, and continuously improving your program. Until next time, this is Dr. Tami Moser wishing you success in your community health endeavors.