
RSA 911 Data, Common Performance Measures, Program Evaluation, and Quality Assurance


>>Welcome and good day to the next session in our PEQA advanced workshop series of 2019. As you can see, and have been looking at the title for our session today, a couple of pieces of information first. Again, if you have technical questions,
sound questions, inability to see certain things on the screen, please put that in the
chat. We will have a question and answer session
after our presentation is completed. However, if a question should occur to you
while we are going through the presentation, please feel free to put that in the Q and
A box, that way we can differentiate technical questions from content questions. We also at the end will talk about how to
obtain CRC credit for this workshop, evaluations on the workshop which are helpful for us,
obviously, and we will have some other announcements as well. Without further ado, I’m Terry Donovan with
PEQA. Again, we are headquartered at the University
of Wisconsin Stout within the Vocational Rehabilitation Institute. Some of you who may have been on workshops in the past know my background as an independent consultant working with Minnesota VR. I was also with the workforce system, with WINTAC, prior to coming here to Stout to work on technical assistance center work. Other presenters are Cayte Anderson and Tim
Tansey. Cayte, would you like to say a few words about
yourself?>>CAYTE ANDERSON: Sure, thank you, Terry. Good morning, everyone. My name is Cayte Anderson, many of you know
me already through previous interactions and work. But I have the pleasure of working on the
PEQA project serving as project director and PI. I work for the University of Wisconsin Madison,
but also have the pleasure of continuing work with the Stout Vocational Rehabilitation Institute. Dr. Tim Tansey, our colleague, also works
closely with us on PEQA. He was planning to join us today, but unfortunately,
had an unexpected event arise, so he will not be on the call. But Tim also works closely with Terry and me and other members of the PEQA team on the advanced workshop series specifically. Thank you for joining us. We are looking forward to the discussion today.>>TERRY DONOVAN: Great. Thanks. What we are going to be doing today is two parts. I'll be covering part 1, and Cayte will cover part 2. One of the pieces that I thought about in how we are approaching this, probably because it's an advanced workshop, is different ways of thinking about how to get ready to do your evaluation, and incorporating some newer types of approaches that others are practicing. So again, thinking about what your customer
promise and the impact, maybe some key value drivers, data that I need to understand if
I am delivering on my customer promise. What key operational decisions could you make if you had your data in hand? Again, part of this goes to the fact that, at least for many of us, we do program evaluation and data collection in order to make better decisions, and to understand how the work that we are doing is or is not having the effect or impact that we thought or expected it would have. In the second part, Cayte will talk about performance
management, logic models, evaluation, quality assurance and using data to inform stakeholders
but again bringing the wealth of experience and research background that Cayte has to
the overall conversation. When I talk about the client promise, there is a fair amount of text on the page, but it generally comes down to this: you can substitute what is our mission, what are we trying to accomplish with clients when they meet us at VR. As an example, you might say we want high-quality VR services provided to eligible persons in order for them to obtain competitive employment. Fairly straightforward. What drives that, what are the values that would underlie it? As you think about it, what is it that we want to have happen, and in some cases, what is the agreement between us and the client who comes to us: we will do X and the client will do Y. In order to make that promise, one piece is
we think as an example we would need easy access for clients. We would need a reliable and predictable service
for our clients. We think that is a particularly high value
driver. You might say rapid service, we may, from
some previous evaluation, begin to understand from conversations with our consumers that
it was taking too long for them to start service or to get through the application process,
or to meet up with their community rehab provider. So part of it may be that we want to see a
more rapid service, you could say a rapid engagement. Then what kind of data might you need around
those value drivers? Again I’ve laid out some possibilities, and
there will be many, many more that some of you will come up with. We could have what are the observations from
our customers and our stakeholders. For many of you that could be your consumer
satisfaction surveys, and it can be other ways, and I’ll mention a couple of other possibilities
and Cayte will as well, about how to approach that. You may have something like quality assurance scores, or, within your quality assurance process, say case review, a way that you identify reliable and predictable service, or even rapid service, as you review cases; but also, as you look at other aspects of quality assurance, elapsed times for different steps in the process. Many if not all VR programs are routinely looking at how long it takes for people to get through, and what the average is or what the norm should be. You may also want to begin differentiating by type of service: we are very good around these two types of services, but we are not sure about these other types of service. That might be some of the data you need.
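To make the elapsed-time idea concrete, here is a minimal sketch in Python. The file name and date columns (application_date, eligibility_date, ipe_date) are hypothetical stand-ins for whatever your own case management export actually uses.

```python
# A minimal sketch, assuming a case-level extract with hypothetical date columns.
import pandas as pd

cases = pd.read_csv(
    "case_extract.csv",
    parse_dates=["application_date", "eligibility_date", "ipe_date"],
)

# Elapsed days for two common steps in the process.
cases["days_to_eligibility"] = (cases["eligibility_date"] - cases["application_date"]).dt.days
cases["days_to_ipe"] = (cases["ipe_date"] - cases["eligibility_date"]).dt.days

# The median gives a sense of the norm; the 90th percentile flags the
# cases that are taking much longer than expected.
print(cases[["days_to_eligibility", "days_to_ipe"]].describe(percentiles=[0.5, 0.9]))
```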
The other part that is beneficial, I believe, when you start looking at program evaluation and at your data and the performance
measures, what are the decisions that this data will let me make? Beginning to think ahead of time, how are
we going to use it, and being as clear as possible. Do we need to improve how clients will access
VR? That is a decision we may want to make. Perhaps we find, or we may start trying to determine, that we need to provide training or reinforcement to follow some approved standards of service. Previously, in our organization, we may have come up with some specific steps around what we believe would deliver reliable and predictable service, and if we think it's not going quite the way we thought, one of the decisions this data may help us with is: is it a training issue, or is it another issue? Then, are there changes we may need to make,
additional changes we may need to make in the process, so clients can move through the
system faster. Again, at least from another perspective, thinking about program evaluation starts with the promise that we are making to clients, to the client who walks in our front door: what can they reasonably expect, and then how can we go about assessing that? One way to do that, and many of you have probably done process reviews within your organizations around lean processes, so this would be similar in some ways and might be a little easier to do, depending on your perspective, is a client journey map. You begin by describing each stage of the client's journey through VR. And the stages can be as few or as many as you would
like. But then the other part to begin thinking about, and part of what we are talking about as we work through this, is the issue of data collection and the time that it takes: the 911 data collection clearly presented additional time to manage and continues to place additional time demands on agencies. So begin to think about what data is collected at each stage. Even with the 911 data and all of those data elements, when in fact is that data collected? Which parts are collected at each stage? That in itself can begin to give you an idea of whether we are collecting this at the right point, or whether this is the best place. Are we overloading certain steps in the process with data collection? Are there options to spread it out? Maybe, maybe not.
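As a rough sketch of a journey map with the data collection attached, using made-up stage names and data elements rather than the official 911 element list:

```python
# A minimal sketch: map each (hypothetical) stage of the client journey to the
# data elements collected there, then see which stages carry the heaviest load.
journey = {
    "Referral/application": ["demographics", "referral_source", "disability_info"],
    "Eligibility": ["eligibility_date", "significance_of_disability"],
    "Plan development (IPE)": ["employment_goal", "planned_services", "low_income_flag"],
    "Service delivery": ["services_provided", "provider", "service_dates"],
    "Exit/follow-up": ["exit_reason", "exit_wage", "q2_q4_employment"],
}

for stage, elements in journey.items():
    print(f"{stage}: {len(elements)} elements -> {', '.join(elements)}")
```
Even a simple map like this makes it easier to see whether one point in the process is being asked to carry most of the data collection.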
The other part to begin to think about is what the client and, in this case, VR agency touch points are: when does the client and the
VR staff member actually see each other, actually connect with each other. It could be text, it could be phone, it obviously
could be face to face, but within the journey, when are there points when contact is made
between the client and the VR agency, and again it can cover the gamut from the counselor
or a tech or administrative support person. The question to think about then with the
client journey map is: are there touch points where it might make sense to collect additional impact data? We will talk about that a little bit. Or to begin collecting certain types of pieces;
some of this is drawn from some of the international activity that is going on with program evaluation,
but again, organizations that have very few resources or almost no extra available cash, which defines many of the current VR agencies in many ways, have begun experimenting
with collecting impact data and evaluation data when they have somebody on the phone
or when the client stops by in the office. So rather than a follow up survey or rather
than a follow up situation, many are looking at when I have the individual in front of
me or on the phone, or via text or something along those lines, that might be a prime opportunity
for me to collect either some of the data that I need, or some even additional impact
data. That is part of the piece around beginning
to think about touch points, not advocating this but beginning to mention how some are
beginning to think about what we collect. The other piece that folks are looking at, although for VR there are the 911 data requirements, is that many organizations are also beginning to think, and this goes back to the client promise, about the absolute minimum amount of data that I have to collect, or that I can collect. There are different strengths with the 911, no
question about that. But if there are other elements that you are
thinking would be beneficial in order for you to answer questions around activities
that are occurring, how do you begin to boil that down to the absolute essential questions
or information that you believe that you need. Some of those impact questions that folks
are taking a look at have to do with what are some of the core insights from the client
about service received. Again, as I noted some are beginning, some
at least international organizations are looking at ways of how do we collect that information
when the client is in front of us. Do we ask them a couple questions right away. I suspect some of you are probably doing that,
it may be interesting to hear some of you have started to formalize that type of piece,
so instead of doing the client satisfaction or client engagement after the fact or separately,
is that becoming part of a data collection piece, part of an interview or part of a monthly check-in with the client? We talked about customer satisfaction and what that could look like, and impact questions, and some of this goes back to tying this information
into what is it that we are trying to make happen for clients in our organization. Meaningfulness, another way of describing
or could be a new piece to think about, what has been the effect of voc rehab on their
lives or on their situation. That may be a piece that comes out in case
notes, may come out in other conversations, but do you begin thinking do we have, do we
begin asking more concrete pieces about what has been the meaningfulness to the individual
for their participation in VR. It may come out in other ways. Questions around social outcomes, has there
been an improvement in the individual’s general income level, and some of that may or may
not show up in the 911 data. It is helpful to understand whether quality of life is improving because of services from vocational rehabilitation. Again, many of you have been in this field way longer than I have; I've been in it for probably ten years now at various points, and in some ways we may get locked into just the data and just what that is looking like. Stepping back, and some of you may have done these pieces already,
what is in a broad sense the social impact of all of these VR services that are being
applied to the individual. Employment is the outcome, along with the performance measures, but if my median income goes up, what is the social outcome of that? What is the impact for me in terms of my feelings, my sense of self-esteem, my
sense of worth, how might that be spreading through the family, and how is that having
a greater outcome, a greater impact over and above the fact of employment. Again, the impact questions on employment,
perhaps looking at wages. Marketing, from this standpoint: how are you getting people in? It goes back to the question of easy access, and sometimes that is a marketing question: how do we help clients understand what it is, what it looks like, the easy access piece. One of the other technical assistance centers
that Cayte and I are involved in is targeted communities and within that is a piece we
are trying to discern for individuals living in extreme poverty and with disabilities,
what might be some of the barriers and how can we better help people understand that
VR is there, VR is available, and we would like to be of service to them. Those are some of the other impact questions. How are you getting people to come to you
and request service, understanding waiting lists, understanding those types of things
but what is the understanding. Then the question of customer, of personas
and some of that dives in with the meaningfulness. What are some of the hopes and dreams that
people have had, how will VR, how do you believe VR will impact how a person overall feels
about themselves after finding a job, after getting additional skills, after working for
a period of time, after getting a promotion after a period of time. How is that beginning to affect both individual
clients and conceivably what does the mix look like that we are after. Some of that can come back to are there certain
types of clients, certain parts of your states, certain types of disabilities that you believe
may be underrepresented in VR, and you would like to get a broader mix of that. After you have discussed impact questions,
then what kind of data do I have? We have talked about we have 911 data, we
have common performance measures. You can look at this. What data do I have, which then can lead to
the follow up question of then how do I use the data that I have to help me answer some
of the questions that may come up. With the 911 data, and some of the categories
that I've introduced, within meaningfulness, if you opt to take this general approach,
the 911 data among other things can clearly help you with the types of services that are
being used or referred or suggested to a client, and the frequency of how that is occurring. The 911 data can also help with progression
through VR. You know these pieces. But, though probably not for everyone, in many ways the advent of the performance measures, quarterly reporting, and monitoring has increased the potential and the actual usefulness of the 911 data
to help programs understand what is happening within their program. The marketing, depending on how that information
is collected within the 911, you have referral sources to and from. You may also start looking at, and I know it has come up at different times, our CRP successes: being able to use your 911 data to tell you generally, and many of you would probably know this from observation, which of our CRPs do better in finding employment, helping people gain employment, and helping people retain and stay in employment, compared to others. How might that affect how you allocate resources, as much as you can?
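A minimal sketch of that kind of comparison, assuming a hypothetical closed-case extract with a provider column and outcome fields; the column names are placeholders, not the official 911 element names:

```python
# A minimal sketch, assuming hypothetical columns: 'crp' (provider),
# 'employed_q2_after_exit' (0/1 flag), and 'q2_earnings'.
import pandas as pd

cases = pd.read_csv("closed_cases.csv")

by_crp = (
    cases.groupby("crp")
         .agg(cases_served=("crp", "size"),
              employed_q2_rate=("employed_q2_after_exit", "mean"),
              median_q2_earnings=("q2_earnings", "median"))
         .sort_values("employed_q2_rate", ascending=False)
)

print(by_crp.round(2))
```
Keeping the case counts visible matters here, since a small provider can look artificially strong or weak on rates alone.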
Then I talked about the issue of customer
personas, you have a slew, a large quantity, of client descriptors and background information currently available in the 911, as well as what you begin to collect at interview about the person's background, for example the low income question that is on the 911, and then, when cases close out, what the wage was when the individual left, and, within the performance measures, median earnings in the second quarter after exit. That begins to give you an idea of what our customers look like and how our customers are changing. When we have a target in mind for what we would like to see for mix, progression, and impact, the 911 data can clearly help you with that. Within social outcomes, at least around the
performance measures, I would suggest looking particularly at things like the measurable skill gains and credential rate. One of the underlying premises of WIOA, tracked by VR and labor for a number of years, is the belief, and we will find out, that increasing the skill level of our VR clients and increasing the rate of recognized credentials among VR clients should improve the likelihood of employment, and conceivably would improve the likelihood of sustained employment. Those can be viewed as indicators and steps
in the process to a good social outcome for a client of increasing their skill level,
increasing ideally their confidence, adding things like recognized credentials that then
are transferable, which ideally gives the individual greater flexibility both in the original job search and in ongoing employment and career development. Then the whole question of employment, second
and fourth quarter after exit, and second quarter median earnings, it goes without saying: those will begin to give you an idea of the impact, which has been one of the key elements of the new performance measures. Then you pull extra data, and as we noted
earlier, what additional data do you need. You looked at your client journey, touch point,
you have current 911 data, many of if not all have customer satisfaction, you have been
doing needs assessments or you have begun or are partway through or finishing up comprehensive
statewide needs assessments. There is a vast amount of data. There may not be any additional data that you
need. But if there is, based on the impact that
you are trying to achieve, some of the newer pieces, which technology is best for collecting
data. That is always a good question to ask as you
are collecting data around program evaluations. How do we collect our data? Is it a face to face? Do we begin using something like texting,
surveys on cell phones for clients, huge penetration of cell phones in the U.S. population, and
some of the response rates that are starting to be collected for surveys that people receive
by phone can be two to three times higher than for an email survey, and certainly higher than for a mail survey. Do you want to do phone calls to individuals to collect it, or do surveys, as we noted, by email or other types of surveys? With that I will turn it over to Cayte, who
will cover the second half of our presentation. Cayte, it is yours.>>CAYTE ANDERSON: Thank you, Terry. The second half of today’s presentation, I’m
going to continue to build upon the foundation that Terry provided during the first half. Our goal is to leave plenty of time for questions
and answers at the end. And any questions that we don’t get to today,
we will have a running document and we will compile those, provide the answers in writing
to everyone following the presentation. Thank you, Terry. As Terry had noted, there are multiple methods
through which you can gather data, etcetera. But within our programs, particularly within
our state VR programs, having the data is wonderful, but we want to consider that data
to be living and dynamic. One of the worst things we can do would be
to simply gather the data for compliance purposes, and spit it out into a report going to a limited
audience. Really we want to have the data coming in
through multiple formats, multiple modalities, and then using that data on a regular basis,
to help generate the knowledge. I still haven’t done this, but I’m determined
at some point to get buttons or pins made that say data is our friend. Really data doesn’t need to be scary. It shouldn’t be reviewed in a vacuum. In fact, you want multiple stakeholders to
be inputting data into your systems and then also having access to information coming out. Certainly understanding that we need to have
proper protections and security measures in place to protect our data, but then what do
we do with it to essentially transfer, transition it into knowledge in usable formats, so that
it can be put into action, to help us not only monitor our programs but to continually
improve our programs, and enhance our outcomes and impact. Terry, I'm not able to... there we go. Sorry. There is a little delay with my screen advance. For those of you that are already familiar
with this document, great, but for those of you that are not familiar with the Institute on Rehabilitation Issues, the 36th IRI, we strongly encourage you to look at this. The IRI was funded through the Rehabilitation Services Administration, and the 36th IRI was coordinated and published by our colleagues at the University of Arkansas through their Currents program. The 36th IRI focuses on performance management with an emphasis on program evaluation and quality assurance in vocational rehabilitation. It came out in 2011, but it still serves as
a strong foundational document. If you haven’t seen it yet, we recommend that
you look it up; you can Google it to find it, obviously, or we can make it available as well following
the webinar today. But we pulled this quote out of the 36th IRI,
I love it. Vision without measurement is just dreaming. Measurement without vision is just wasted
effort. You need to have both the vision in terms
of what you want to achieve, where you want to go, as well as the measurement to be able
to measure progress as well as the outcomes and impact so that you have clear data to
demonstrate when you have achieved your goal. This next slide, performance management, again
we borrowed this graphic from the IRI, but it does a beautiful job of illustrating the
components involved in performance management. When we talk about performance management, really
it’s not one static thing, but performance management should be active and dynamic. There are really two core components of performance
management. First of which is program evaluation, and
program evaluation focuses specifically on acquiring the data and the metrics associated
with measures, analyzing the data and reporting results, so that you can use the data to make
decisions about policy, practice, service delivery, etcetera. Within program evaluation, it involves the
components of designing your evaluation, what is it, what data do you need to collect, how
are you going to gather it, what are you hoping to learn, some common questions in the design
phase involve are we serving our customers in the best way possible. Are we meeting our standards. Are we going beyond compliance and looking
at a more involved comprehensive system. Then also how can we encourage innovation
and yet develop measures of accountability. Within our programs, how do we encourage and
empower our staff to think creatively and continue to innovate while still having accountability
measures within our programs. That can apply to our vendors and service
provider partners as well, the design. Data collection, how are we gathering our
data? Terry mentioned a couple of techniques that
we have been finding has been helpful in terms of touch points with the consumers, texting,
etcetera. Another element to consider during the data
collection phase is cultural responsiveness. Are our methods designed in a culturally responsive
manner. Many individuals we are now serving in our
public VR programs are living at or below the federal poverty level. Many are also living in, experiencing, multigenerational poverty, which from a cultural perspective carries different considerations than for individuals that may be experiencing middle-class status. So really ensure that we have a culturally
responsive approach for evaluation. Data analysis, once we have the data, what
does it tell us? Utilize some of the rapid cycle evaluation approaches, so that we don't just gather data, let it sit there, let it build up, and then try to work with it all at once. It is good to work with our data on a regular basis, so that we have a sense of what is going on at a point in time. Then of course reporting the results, and
that you may need to report differently to different stakeholder groups, the type of
report and the formatting that you share with legislators might look different than the
way that you report data to your state rehab council, which may look different from the
reporting that you provide to RSA. Granted, the data should be consistent and
it needs to be accurate. But you might need to change the formatting
across audiences. The second core component involved in performance
management is quality assurance. That is looking at the quality of the data,
so you could have a lot of data in your system and I know that many of you are actively working
on this in your programs, you want the data in there but what does the data tell you? Is it meaningful? Is it accurate? Now with the WIOA data elements, many of the
programs that we work with are becoming more familiar with the data and are actively working
on improving the quality of the data in their systems, right? Ensuring standards of quality are met, corrective actions; that term can have a negative connotation, but we think of corrective action as a positive, proactive measure. You don't need to be reactive in terms of looking at quality. If you are looking at your data on a regular basis, you can pick up on patterns or trends and be able to address them rather quickly. It is a wonderful opportunity for administrators to work directly with counselors and staff in the field to find out why there might be gaps in the data, or, if there are inconsistencies, what is going on.
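As one small illustration of routine quality checks, here is a sketch assuming hypothetical column names in a case extract; the point is simply to run the same handful of checks on a regular basis and take the counts back to the field:

```python
# A minimal data-quality sketch over a hypothetical case extract:
# count missing values and a couple of logical inconsistencies.
import pandas as pd

cases = pd.read_csv("case_extract.csv", parse_dates=["application_date", "exit_date"])

checks = {
    "missing_exit_reason": cases["exit_reason"].isna().sum(),
    "missing_disability_code": cases["disability_code"].isna().sum(),
    # Dates out of order usually point to data-entry problems.
    "exit_before_application": (cases["exit_date"] < cases["application_date"]).sum(),
    # A wage recorded for a case closed without employment is worth a look.
    "wage_without_employment": ((cases["exit_wage"] > 0) & (cases["employed_at_exit"] == 0)).sum(),
}

for check, count in checks.items():
    print(f"{check}: {count}")
```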
Often it's field staff that can provide the best input in terms of what is going on with your data. We encourage you to engage folks in the discussion. Then evaluation of the actions taken, and
again this does not need to be viewed as a negative component, but in terms of corrective
actions what is going on with our data, what does it look like, is there anything that
we need to change? Then just the follow up, closing the loop,
circling back to ensure that we are looking at it again to see if the change did happen. Overall, the goal with performance management
is continuous improvement through systematic constructive action, a dynamic living process,
if you will. You need to have both program evaluation and
the quality assurance elements in place, to have a truly comprehensive performance management
approach. Some of the key questions that you may have
during your performance management process, what are the goals or outcomes? What do you want to achieve? Who are the stakeholders that should be involved? Terry mentioned some stakeholder groups earlier,
but certainly consumers, families, service providers, staff within the VR organization,
in terms of field staff, counselors, management, and administrators; it
can be helpful to get information across multiple levels or tiers within an organization. What information do you need, and oftentimes
it will be the stakeholders that help you identify what information is needed. Again, remembering that culturally responsive
evaluation can be tremendously helpful in terms of identifying what information is needed. How do we measure progress and outcomes? Once we identify the questions, the goals
and outcomes, who needs to be involved? What information do we need to gather. How do we measure our progress so that we
know when we are achieving our goals, and/or if we are falling behind a little bit on achieving
our goals, what is going on that is preventing us from moving forward as quickly as we would
like. We need to define success, because again if
we are only measuring but we don’t have that vision or if we only have the vision and don’t
have the measurement, we really don’t know how to adequately define success. When and how are measurements presented? To whom should the measurements be provided? Those are key questions, again back to multiple
stakeholder groups, multiple audiences, attempt to meet the needs and finally what is our
baseline. Where are we starting and where do we want
to get to. Some of you may be familiar with the Baldrige
program, and/or you may be Baldrige recipients yourselves. The program at the University of Wisconsin
Stout is a Baldrige site, and for those of you that may not be familiar with Baldrige,
it is a national quality program on performance excellence. It really is a comprehensive, structured approach to performance; we use the term performance excellence, but it also encompasses performance management. It was established by the federal government in the late 1980s and was designed to enhance the competitiveness, quality, and productivity of organizations within the United States. They have different groups that they focus on, tailored to three specific areas: one is business and nonprofits, the second is education, and the third is healthcare. Within the Baldrige structure for performance excellence, they have seven key categories
for measuring performance excellence. The first is leadership, also strategic planning,
customer focus, so you always need to include the customer focus, who are we serving, are
we achieving what we need to achieve in terms of outcomes, success, and high quality service
delivery. Fourth area is measurement analysis and knowledge
management. The fifth is the workforce focus. 6th, process management, and then finally
the 7th is results. If you are interested, this is just a introduction
to Baldrige as one example of performance excellence program. We provided the link and you are welcome to
check it out. It’s a full continuum. To become Baldrige there is a whole process
and fees involved. However, the website has many excellent resources
available that are free of charge, available to the general public. You can download them and put them into practice
in your organization as you see fit. When it comes to performance management, of
course we have performance measures, I’m not going to spend too much time on this. But when we think of performance measures,
there are multiple phases. We have process measures, progress measures
and outcomes. Oftentimes we refer to this as either formative
evaluation or summative evaluation. Formative is focusing on the process, and
then the summative is outcomes. Are we meeting our objectives. There we go, thanks. We have some information today on logic models,
and how this gets into program evaluation. We are not doing a deep dive into logic models. But if you are interested in more information
on this, we are happy to provide it. We have monthly discussion sessions, Terry
typically facilitates these and does a wonderful job. I’m happy to join for future discussions,
if you think it would be helpful to work through the logic modeling, specific to vocational
rehab in the future. Logic models if you haven’t had an opportunity
to work with logic models yet, you are in luck because they are coming. They are picking up in popularity. They fit really nicely with strong program
evaluation models. Oops, can you go back one slide, please? Thank you. The basic logic model that we have here is
a linear process. Logic models don't have to be designed as a linear process, but the goal with a logic model is to develop a picture of how your organization does its work, based on the theory and assumptions in a program. It ties together the resources and inputs, the activities or action items, and the outputs, so what you are actually producing or delivering, which then contribute to the outcomes and impact. As you look at the slide, the first two steps are the planned work, which will then ideally lead into the intended results. The way this particular logic model is laid out and formatted, it is a series of if/then statements: if we have the proper resources and inputs, then we will be able to conduct the activities; if we conduct the activities, then we will be able to produce the outputs; and if we have the outputs, then that will lead into the outcomes, etcetera.
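As a small illustration of that if/then chain, here is a sketch using made-up, VR-flavored entries; the five categories come from the basic logic model, and the contents are placeholders only:

```python
# A minimal sketch of the if/then chain in a basic logic model.
# The five categories come from the slide; the entries are placeholders.
logic_model = [
    ("resources/inputs", "counselors, case management system, provider agreements"),
    ("activities", "counseling and guidance, training referrals, job placement"),
    ("outputs", "IPEs written, services delivered, employer contacts made"),
    ("outcomes", "credentials earned, competitive integrated employment"),
    ("impact", "sustained employment and improved quality of life"),
]

# Read the model as a series of if/then statements.
for (step, detail), (next_step, _) in zip(logic_model, logic_model[1:]):
    print(f"IF we have {step} ({detail}), THEN we can produce the {next_step}.")
```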
Many of you may be familiar with the Kellogg Foundation, but if not, we have included the link here. The Kellogg Foundation has a wonderful repository of strong program evaluation resources and materials, and they also have a specific
guide for developing logic models, which we found to be very helpful. This is all available on their website openly
and so again, you can click on that link. If you find the information helpful, you can
take it and use it. What I like about the Kellogg Foundation materials
is that they are designed so that they can be used by anyone. We know researchers that are using these resources,
and we also know many program administrators, policy staff, and practitioners that are using
the materials. They are really designed in a user friendly
fashion, that any of us can take and put into action in our agencies. Next slide, thank you. When you are reading a logic model, and this is a fairly basic one, the resources and inputs box is really designed around the question: what do you need to accomplish your activities? What resources do you need to have in there? That includes financial resources, staff, and space, and it may include community members or service providers, transportation, etcetera. Moving to the second step, the activities, the underlying question is: assuming you have the resources and inputs, what activities do you need to conduct? Activities could include training and data collection, as Terry noted with the touch points: how do we want to collect the data, and what is most effective? That leads into our outputs, where the underlying question is: what evidence of service delivery is there? Examples of outputs include service delivery, but they may also be more administrative items, such as reports, or reports to multiple stakeholder groups. When it comes to outcomes, this particular
illustration doesn’t, it just has one box for outcomes. You can split this into two if you would like
to, which are the shorter term outcomes, which are designed around what do we, what specific
changes do we hope to see in program participants’ behavior, knowledge, skills and status and
level of functioning, based on the activities that are provided and outputs that are provided. Typically short term outcomes should be attainable
within a one to three year time frame, whereas longer term outcomes are typically achievable
within a four to six year time frame. This is an important point, in that in many of our programs we expect to see outcomes immediately. I think it's better when you can split those out: the more immediate outcomes you would like to see go in the shorter-term box, one to three years; others may take more time, four to six years; and ultimately the impact, which
is the fundamental changes that we want to occur within our organizations, communities
or more the systems change work, really we can’t expect that immediately. That takes place over a 7 to 10 year time
frame. Oftentimes within the work that Terry and
Tim and I are involved with, grants and demonstration projects and whatnot, we don’t see the impact
immediately. Sometimes the impact takes longer; if we are effective,
we will see the impact beyond the scope of our funding for projects. However, within state organizations, you may
have better ability, to see some of that impact longer term if you are actively working with
your data, and you have a solid performance management approach in place with the evaluation
component. Next slide, please. This is an example logic model. I’m not going to run through it today. This one is again an excerpt from a Kellogg
guide. But this provides an example of a logic model
that is geared more towards healthcare, again through discussion session we would be happy
to run through development of a logic model more specific to vocational rehab if folks
think that would be helpful. Why use logic models in evaluation? First and foremost, good evaluation reflects
clear thinking and responsible program management. It provides a nice roadmap, something that
is very clear for folks to be able to take and use and understand what are you hoping
to achieve and how are you going to get there. It also serves as a visual representation
of how a program is intended to work. This is something that everyone, all stakeholders
involved can have a copy of. Typically we like to see logic models that are one page; if you can get it down to one page, that tends to be the most effective, though one to two pages
works well as well. But logic models also connect program resources
and activities with the outcomes in a comprehensible document, which can be helpful to share with
legislators and administrators. Logic models also help identify
outcomes and help anticipate ways to measure them. Then again it provides one universal roadmap
for all stakeholders to have, and to use in achieving the goals of the program. Another piece that we didn’t get into much
here today, but when developing a logic model, it is really helpful to get input across stakeholder
groups, and then you end up number one, with a stronger logic model that is probably going
to more accurately reflect what needs to be done in terms of achieving the goals, and
then, also, you get buy-in from multiple stakeholder groups, which can be much more
effective in achieving outcomes. Using evaluation data, Terry touched on this
earlier, but when you have this comprehensive performance management package involving your
program evaluation and quality assurance, you want to use your evaluation data on a
regular basis. Don’t let it sit there and/or have just one
or two people looking at it on a regular basis. You want to be able to integrate it into the
culture of your organization. Using your data to modify the program and
improve service delivery, consider it a living entity, using your data for planning, also
providing information both quantitative and qualitative to legislators and other stakeholder
groups. It can help identify capacity building needs
that might be taking place. These may change over time. Recognizing stakeholders, because you may
also have some stakeholders or vendors that are doing a fantastic job, and that can come
through clearly in your data. Don't think of the data
as a gotcha. Think of the data as an opportunity for you
to understand what is going on in your program at a deeper level. It also helps identify professional development
opportunities, both internally as well as with other stakeholder groups. Using it to monitor and improve programs,
and of course using it to help inform a wide variety of stakeholders, including consumers,
advocates, councils, service providers, and the general public about your program. One of the things, and I hate to say it, but over the years we continue to see that the public vocational rehab program is one of the best kept secrets, and we don't want it to be. Use your data to help tell your story. Finally, to reiterate, keeping stakeholders
informed and involved is critical. Keeping legislators informed is critical. Every program needs to have an elevator speech
which is the short message with the big impact, which helps us demystify the public voc rehab
program. What is your program, use your data to illustrate
your successes and your outcomes. Also again using that knowledge translation,
so using your data but making that available to help illustrate your successes in different
formats for different audiences, and get that information out there. We want people to know how successful you
are in your work. We want people to know the complexity of your
caseload, that’s all good information that you want to have available. Terry, I think with that, oh, yes, and use
it for future planning because as we all know, the only constant in life is change. Even though the logic model that we shared
today was designed in the linear format, really we know that these things are circular. As soon as you complete one project or achieve
one goal, you probably have multiple other projects and goals in the works. We are happy to share more on that at a future
date.>>TERRY DONOVAN: Thank you, Cayte. We have time for some questions. I think Jen has been kind of keeping track
of questions. If you would like to read the question someone
may have asked us.>>Sure.>>TERRY DONOVAN: Two or three before we are
done.>>Sounds good. One that came through, what are some strategies
to get other staff interested in evaluation?>>TERRY DONOVAN: Cayte, you want to talk
about that one first?>>CAYTE ANDERSON: Sure. The question is what are strategies to get,
was it other staff involved?>>Yes.>>TERRY DONOVAN: Yes, I believe so.>>CAYTE ANDERSON: Sure. So in terms of other staff I’m assuming that
could be within the public VR vocational rehabilitation program. But there are many strategies. But a couple that come to mind immediately,
with your comprehensive statewide needs assessments, make sure that you are actively gathering
input from staff within your agency across a variety of roles and responsibilities. I think that is important. You can do that quantitatively through surveys,
or you can do it through interviews, internal focus groups. Another method that we are seeing gain in
popularity are data parties. I know, if you are not an evaluator you
probably think data parties? That doesn’t sound like a party. But it’s an opportunity to essentially have
your own small group or community of practice around program evaluation and quality assurance within
your agency and meet on a regular basis, and bring in questions, comments, challenges,
and/or you do a data pull, and you look at it and work through it together, and say what
is this telling us? What are our questions, what is this telling
us? How does it look? It provides an opportunity for more people
within the organization to become involved with the data, and familiar with the data,
so that it, again, data is your friend. Data doesn’t need to be scary. Do you have any other suggestions?>>TERRY DONOVAN: One other thing sometimes
I've heard from a couple of evaluators in VR agencies: they have started creating reports that they think people should know about and started sending those around to supervisors or managers. Generally you can use 911 data, but more than just counts, maybe with interpretation or zip code differentiation, and they have reported back that that seems to be getting the interest of folks: oh, this is more than just counts. Now I'm beginning to understand why I enter
gobs and gobs of information. So for some, instead of waiting for people
to come to them, they are sort of pushing the data out to others, over and above standard
reports. That is one I’ve heard from folks.>>CAYTE ANDERSON: Yeah, that reminded me,
along those lines, data visualization can be really great. Within your organizations, you probably have a variety of different case management software packages that you are using. But data dashboards can be helpful; a picture is worth 1,000 words, as we say, right? Being able to provide more point-in-time data through data dashboards or visual reports, rather than a longer narrative format, can be helpful.
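As a tiny sketch of that kind of point-in-time visual, assuming made-up monthly counts rather than real agency data:

```python
# A minimal dashboard-style chart using hypothetical monthly counts.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
applications = [120, 135, 128, 150, 142, 160]  # hypothetical counts
ipes_written = [80, 90, 85, 105, 100, 118]     # hypothetical counts

fig, ax = plt.subplots(figsize=(7, 3.5))
ax.plot(months, applications, marker="o", label="Applications")
ax.plot(months, ipes_written, marker="o", label="IPEs written")
ax.set_title("Applications and IPEs written, by month")
ax.set_ylabel("Count")
ax.legend()
fig.tight_layout()
fig.savefig("monthly_dashboard.png")  # drop onto a shared dashboard page
```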
>>TERRY DONOVAN: Great. Thanks. I notice we have about four minutes left. We don't want to keep you past the hour that you signed up for. Jen, can you give a synopsis or description
of how people can get their CRC credits if they would like, the evaluations, and any
other announcements that you think would be good.>>Sure. First of all, I would like to talk about the
capstone projects, we encourage you to look at the capstone projects available on the
PEQA website, under Visit the Capstone Project Hub. After successful completion of the online PEQA TECH certification courses, participants implement a capstone project within their state VR program. We have Margaret from South Carolina, project
titled comprehensive needs assessment for pre employment transition services, from Minnesota,
project titled a pilot study evaluating the effectiveness of person centered planning,
and David from Louisiana, project titled motivational interviewing, training effect and vocational
rehabilitation. We hope you take a moment to look at these. I added the link to access these in today’s
chat box. If you haven’t done so already, we encourage
you to visit the 2019 summit conference website, and I’ll add that link as well to the chat
box. This year’s conference is in Portland, Maine,
from September 4 through 6 at the Holiday Inn by the bay, and there is an early bird
discount available until July 15. Lastly, today’s webinar is worth one CRC,
and an email is going out tomorrow with an evaluation link where you may request the CRC. You will be able to find the archived version
of this webinar on the PEQA tech website at PEQA tech.org as well as our YouTube channel. Thank you for joining us today. We hope you enjoyed the webinar.>>TERRY DONOVAN: Great, thanks, Jen, and
any closing comments, Cayte?>>CAYTE ANDERSON: None other than just to
thank everyone for taking time out of their busy schedules to join us today. In the evaluation, please if you have other
topics or other information that you would like us to engage in in the future, we will
be happy to spin out as many webinars and/or discussions as would be helpful. So feel free to send us your input.>>TERRY DONOVAN: I’ll note on the CRC piece,
this will be archived, and we also have previous webinars that we did on the VR Evaluation Coach with Todd Honeycutt, and Todd also, I believe in January, had a presentation on using
pivot tables to analyze data, and he used examples around pre employment transition
services. Those are archived on our PEQA tech website
and could be available for you if you need a couple of extra CRC credits, as the deadline comes down for having the requisite number of credits that you need. With that, thank you, everybody, for joining
us. We are done for today. Bye bye.

