
Accreditation: Standard 3 – Evaluation

Developing Your Program's Evaluation Plan

Continuing the baking theme from the last curriculum blog: have you ever watched The Great British Bake Off (GBBO) on PBS?

It is an annual 10-week competition in which amateur bakers demonstrate their craftsmanship with cakes, breads, pastries, and desserts. The contestants assemble for a few hours in the chilly British countryside under a large garden-party tent outfitted with complete kitchens, one per baker. Here is how the BBC describes the show: “In each episode, the amateur bakers are given three challenges: a signature bake, a technical challenge, and a show-stopper.” When the bell rings, the harried bakers stop all preparations and present their products for evaluation by two experts, a renowned cookbook author and a top artisan baker. The judges inspect, taste, literally tear apart, and comment on every aspect of the baked goods. As you might imagine, there are all kinds of baking disasters, competitive upsets, and amateur-chef meltdowns mixed with serene perseverance. For entertainment value, there is running commentary by two comedians who take notice of every unexpected event.

Let’s put this in context. GBBO is a TV show, so it is not the same as running a training program. However, there are many commonalities. The curriculum is established: 10 weeks’ duration, with weekly assignments to demonstrate relevant skills of increasing difficulty. The facilities include consistent sets of state-of-the-art equipment, and the same panel of expert judges evaluates every bake. Everyone has the same amount of time and resources to prepare and complete the assignment. The weekly formative evaluation is derived from the mission of the show; reflects the broad performance goals of the show; and is based on a consistent set of expectations, such as the product’s appearance, the technical difficulty of the recipe, and the baker’s embellishments, creativity, taste, texture, etc. The contestants are ranked from best to worst. At the end of the 10 weeks, there is a summative assessment of overall excellence: the contestants are ranked and the winner is chosen.

The evaluation/assessment is based on the goals and objectives of the curriculum (successfully creating three products per episode, of increasing difficulty, that span the domain of baked goods). The evaluation is formative and summative, and it is based on established, measurable, and observable criteria of acceptable performance. There are multiple expert raters. Having multiple raters provides an estimate of reliability (reproducibility, or consistency in ratings). The conclusions about the bakers’ competence are considered valid. That validity rests on a synthesis of measurements (taste, texture, appearance, etc.) that are commonly accepted, meaningful, and accurate (to the extent that expert judgments are accurate) indicators of excellence in the baking community.

What is not evident to the viewer is a specific feedback loop for the participants or for program planning. Since this is a televised competition and not an actual baking course, a formal program evaluation feedback loop is highly likely to be in place as a mechanism to assure continued viewership. Whether a formal participant feedback loop exists is unknown.

Bringing this back to NNPRFTC Standard 3 – Evaluation. This Standard identifies specific areas of the NP program that should be regularly evaluated to determine whether all the components of the program’s unique mission, goals, and objectives are being met. It is important to link the training program’s evaluation to the elements of Standard 3, since these elements are grounded in best practices in training. The Standards should therefore serve as the base for developing your programmatic evaluation plan.

Program evaluation plans should be designed to answer questions about effectiveness, productivity, communication, and capacity building. As with the baking show, in training, program evaluation and individual assessment provide evidence about trainee performance in the context of explicit parameters. Programmatic evaluation can provide evidence to answer core questions: Are participants learning the rotation-specific skills and knowledge needed for success in complex care environments? Are trainees, preceptors, and faculty getting the feedback they need for optimal performance?

Evaluation should be formative, summative, and shared. Evaluation plans should specify the expected quality and quantity of task completion, the time frame, and the resources provided. The criteria for success should be clearly documented and based on predetermined, known standards of excellence. A written program evaluation plan provides a mechanism to communicate important outcomes to all stakeholders and provides specific, detailed protocols for participants.
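To make this concrete, here is a minimal sketch, in Python, of the kind of information a single entry in a written evaluation plan might capture. The field names and values below are hypothetical illustrations, not prescriptions from Standard 3 itself.

```python
# A hypothetical, minimal structure for one entry in a written evaluation plan.
# Field names and values are illustrative; adapt them to your program's
# mission, goals, and the elements of Standard 3.
evaluation_plan_entry = {
    "outcome": "Trainees demonstrate rotation-specific diagnostic skills",
    "measure": "Preceptor rating on a 5-point behaviorally anchored scale",
    "type": "formative",  # formative or summative
    "time_frame": "end of each 4-week rotation",
    "resources": ["standardized rating form", "trained preceptor raters"],
    "criterion_for_success": "mean rating of 4 or higher by the final rotation",
    "shared_with": ["trainee", "preceptors", "faculty", "program leadership"],
}

# Print the entry as a simple checklist for stakeholders.
for field, value in evaluation_plan_entry.items():
    print(f"{field}: {value}")
```

Whatever form your plan actually takes (a table, a spreadsheet, a narrative document), the point is the same: each outcome is paired with an explicit measure, a time frame, the resources provided, and a predetermined criterion for success.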

Having consistent evaluation and assessment protocols allows for meaningful comparisons and for the development of data-based plans for the future. The quality and meaningfulness of the evaluation/assessment depends on the quality of the evaluation plan, the quality and quantity of the data (observations), the judgments drawn from analysis of the data, and personal (programmatic) value systems. To be credible, data must be reliable (reproducible) and valid (accurate). This also means that the data must be observable and measurable.
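To see what an estimate of reliability can look like in practice, here is a minimal sketch, in Python, of Cohen’s kappa, a widely used chance-corrected index of agreement between two raters. The ratings below are hypothetical, not drawn from any actual program.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten trainee performances on a 3-point scale.
rater_1 = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3]
rater_2 = [3, 2, 3, 2, 2, 3, 3, 2, 1, 2]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # 0.68 for this data
```

A kappa near 1 indicates near-perfect agreement, while a value near 0 indicates agreement no better than chance. A program might set a minimum acceptable level of rater agreement in its evaluation plan, though the specific threshold is a programmatic decision.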

One example of a program that designed its evaluation plan to measure curricular goals and objectives is described in a 2016 article by Rugen, Speroff, Zapatka, and Brienza. Their NP training program evaluation plan, including a new standardized competency tool, was implemented in an interprofessional primary care NP postgraduate training program within the US Department of Veterans Affairs. The competency tool confirmed that the NP residents improved significantly over a 12-month period and identified areas for program improvement, including strengthening the residents’ differential diagnostic abilities as they progress through the residency.

Standard 3 provides the anchors for your plan: the outcomes that should be measured on a formative and summative basis. From an accreditation perspective, the success of the plan is determined by the quality and relevance of the observable evidence with regard to the Standard. As the CDC writes: “An evaluation plan is a written document that describes how you will monitor and evaluate your program, so that you will be able to describe the ‘What’, the ‘How’, and the ‘Why It Matters’ for your program and use evaluation results for program improvement and decision making.” The important thing is to develop an evaluation plan that encompasses all aspects of your training program.

Resources: There are some well-designed, user-friendly resources available on the web for developing an evaluation plan. A few selected examples follow.

The Pell Institute has an excellent user-friendly toolbox that steps through every point in the evaluation process: from designing a plan, to data collection and analysis, to dissemination and communication, to program improvement.

The CDC offers an evaluation workbook for obesity programs; its concepts and detailed work products can be readily adapted to NP postgraduate programs.

The Community Tool Box, a service of the Work Group for Community Health at the University of Kansas, has developed an incredibly complete and understandable resource that offers theoretical overviews, practical suggestions, a toolbox, checklists, and an extensive bibliography.

Another wonderful resource, Designing Your Program Evaluation Plans, provides a self-study approach to evaluation for nonprofit organizations and is easily adapted to training programs.  There are checklists and suggested activities, as well as recommended readings.

In summary: the purpose of Standard 3 – Evaluation is to ensure that programs engage in systematic, thoughtful, meaningful evaluation. One of the keys to success is to have a written plan and then to follow it. There are different methods for designing evaluations; links to several toolboxes are provided above. Find one that fits your program’s culture and approach, then use it. One of the great things about evaluation plans is that they are meant to promote positive change. If you get formative data that doesn’t make sense, review the tool. Review the plan. Revise your approach; tweak it until it works. Make the evaluation plan fit the needs of your program and your stakeholders. The important thing is to own it and make it an essential component of the program. Make it yours and anchor it in the accreditation standards.

Let’s close with a thought-provoking quote from Tariq Ramadan, a Swiss-born contemporary philosopher and author who is on the faculty of St. Antony’s College, Oxford University: “Clarity and consistency are not enough, the quest for truth requires humility and effort.”

Until next time,
Candice