Dr. ‘Bob’ serves as a Training Specialist for the U.S. Department of Homeland Security as well as an adjunct professor for the School of Business & Leadership at Regent University. Prior to coming to Regent, he served as a teacher for Virginia Beach Public Schools and was an instructional designer for mid- and senior-level leadership courses for the DoD and DHS. Over the past two years, he has assisted with an international initiative driven by Edify Online to partner with MIT-World Peace University, where he designed, developed, and facilitated Design Entrepreneurship and Strategic Business courses for students in Pune, India.
In case you couldn’t guess by reading the title of the article, the subject of math has never been a strong suit of mine. In fact, to this day, my mother doesn’t believe I graduated from high school. Nonetheless, I’ll spare you the details of my math shortcomings and get to it – simply asked, how are institutions measuring Return on Investment (ROI) for their students and beyond, specifically within the context of the Kirkpatrick model (the four levels of training evaluation)? Before continuing down this tumultuous path, it’s probably best to establish a baseline vocabulary and a refresher on Kirkpatrick’s four levels.
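First, though, here’s the one piece of arithmetic even I can manage. The training industry’s standard ROI formula (popularized by the Phillips ROI Methodology, which adds a fifth level on top of Kirkpatrick’s four) is: ROI (%) = ((program benefits − program costs) ÷ program costs) × 100. The division is the easy part; producing defensible numbers for those benefits is where the rest of this story gets messy.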
According to the folks at Ardent Learning, the Kirkpatrick Model is a globally recognized method of evaluating the results of training and learning programs. It assesses both formal and informal training methods and rates them against four levels of criteria: reaction, learning, behavior, and results.
Level 1: Reaction – measures whether learners find the training engaging and/or favorable. This level is most commonly assessed by an after-training survey. Many higher education institutions deploy ‘end-of-course surveys’ after each course to attempt to gauge how the learner ‘felt’ about the experience.
Level 2: Learning – gauges whether each participant acquired the intended knowledge, skills, attitude, confidence, and commitment from the training. In essence, institutions use formative and summative assessments to evaluate mastery of skills/knowledge.
Level 3: Behavior – measures whether participants were truly impacted by the learning and whether they’re applying what they learned. Basically, this is where we should start looking to see whether the education/training received is valued, in tangible ways, by the student and the organization/community they serve.
Level 4: Results – measures direct results by evaluating the learning against an organization’s business outcomes. Simply stated, it gauges to what degree the education/training impacts the business/community/industry at large.
Higher education institutions focus on levels 1 and 2 because that’s what the industry hangs its hat on, and because they are the easiest to deploy. We could spend hours discussing each level, but for brevity, here’s what’s really going on:
We’ve all attended various training events and been handed exit tickets or basic surveys on the way out. In higher education, those end-of-course surveys provide some relevant data, but they are often emotion-driven. The response rate is typically less than desired, and the results may contain inflated Likert scores – imagine the average student just checking the boxes to finish the survey. The students at the extreme ends of the performance spectrum (high and low performers) usually leave anecdotal comments that provide little insight or objective data. Nonetheless, the survey was done – Level 1 complete.
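To make that concrete, here’s a minimal sketch of the kind of sanity checks a program office could run on Level 1 data. Everything here is hypothetical – the enrollment figure, the scores, the survey length – but it shows how cheaply you can quantify both the response-rate problem and the box-checking problem:

```python
# Minimal sanity checks on hypothetical end-of-course (Level 1) survey data.

enrolled = 120  # students enrolled in the course (hypothetical)
responses = [   # one list of 1-5 Likert scores per returned survey (hypothetical)
    [5, 5, 5, 5, 5],   # looks like box-checking at the high end
    [4, 3, 4, 2, 4],
    [1, 1, 1, 1, 1],   # straight-lined at the low end
    [3, 4, 3, 4, 3],
]

# Response rate: how representative can these results possibly be?
response_rate = len(responses) / enrolled
print(f"Response rate: {response_rate:.0%}")  # only 3% of enrolled students

# Straight-lining: every item given the identical score, a common sign
# the respondent clicked through without reading.
straight_lined = [r for r in responses if len(set(r)) == 1]
print(f"Straight-lined surveys: {len(straight_lined)} of {len(responses)}")
```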
Ah yes, formative and summative assessments. Obviously a critical means of measuring mastery, assessments are the bread and butter of the higher education industry. Tests, quizzes, papers, projects, discussions, etc. – the choices are fast and furious (some not so fast). But just because a student can demonstrate some level of mastery, does that tell us how effectively they will apply it in a real-world environment? Level 2 is sort of complete – now enter Level 3.
This is where the wheels start coming off the bus in higher education. In any industry, Level 3 is difficult to conquer. Although it seems like a reasonable objective, ascertaining how effectively a student (who has completed some form of training or education applicable to their world of work) performs on the job is extremely difficult. Furthermore, even if a survey were deployed to a student’s supervisor, and they returned it, isolating any measurable increase in performance as a direct result of the training/education is virtually impossible. Level 3 is not really complete.
On to level 4. Simply said…good luck! Objectively measuring a student’s learning in relation to the impact it has on business outcomes or community objectives is extremely challenging. Creating a data instrument that could capture such information is difficult enough, let alone the inherent barriers to achieving a meaningful response rate from external stakeholders. As with level 3 data, how can we show a direct correlation between the skills and knowledge acquired through education and the business outcomes of an organization? Level 4 is rarely attempted.
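Even in the best case, the math itself betrays the problem. Suppose (hypothetically) you had per-team program-completion rates and a business outcome metric in hand; the strongest honest claim you could compute is a correlation, and a correlation still isn’t causation:

```python
# Hypothetical Level 4 exercise: correlate program completion with a
# business outcome. Uses statistics.correlation (Python 3.10+).
from statistics import correlation

# Per-team share of staff who completed the program, and that team's
# year-over-year change in a key outcome metric (all numbers invented).
completion_rate = [0.10, 0.25, 0.40, 0.55, 0.80]
outcome_change  = [0.02, 0.01, 0.06, 0.04, 0.09]

r = correlation(completion_rate, outcome_change)
print(f"Pearson r = {r:.2f}")

# Even a strong r here could be driven by confounders (budgets, hiring,
# market shifts), which is exactly why defensible Level 4 claims are rare.
```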
Now for some good news!
Understanding the challenges associated with objectively measuring how one’s learning directly impacts an organization is simply the beginning of the journey. You just have to start somewhere. The University of West Florida seems to be taking this approach to heart by including representatives from the organizations where students are employed on their respective dissertation panels. Even though it’s not the traditional pathway for levels 3 and 4, it does bring external stakeholders into the student’s educational pathway, work that will directly impact their organizations. I hope their future plans include gathering level 4 feedback 12 to 24 months later to gauge the true impact (if any) on the organization.
In an ideal world, learning institutions would be able to show direct relationships between the education provided, the learning that has taken place, and the impact on the communities they ultimately serve. Let’s face it, we don’t live in an ideal world, but we have to at least try! Levels 1 and 2 don’t add up to a level 3, and on their own they don’t give the institution, the student, or the community (businesses) objective insight into how to refine the effort. Don’t simply rely on Alumni Affairs to survey alumni about where they are working. Take it a step further and ask who would be willing to loop in their employers for testimony on the effectiveness of the education your institution provided. Another strategy to consider is to work with your career and/or talent management office to build internship and externship partnerships, and to stipulate in those agreements that the organization will provide level 3 feedback on the readiness of the students they employ. Most of the mechanisms for obtaining level 3 and 4 data are already in place; institutions just have to be willing to leverage those relationships enough to get the data reported back.
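To show how little infrastructure this actually requires, here’s a purely illustrative sketch of the record an internship agreement might oblige a partner organization to return each term. The field names and values are my assumptions, not any institution’s actual instrument:

```python
# Purely illustrative: one Level 3 feedback record a partner organization
# might return under an internship/externship agreement. All names and
# values below are invented for this sketch.
from dataclasses import dataclass
from datetime import date

@dataclass
class PartnerFeedback:
    student_id: str
    employer: str
    program: str
    collected_on: date
    readiness: int           # supervisor's 1-5 rating of on-the-job readiness
    applying_learning: bool  # is the student applying what they learned?
    comments: str = ""

record = PartnerFeedback(
    student_id="S-1042",              # hypothetical
    employer="Acme Logistics",        # hypothetical partner organization
    program="Strategic Business",
    collected_on=date(2025, 5, 1),
    readiness=4,
    applying_learning=True,
    comments="Led a process-improvement effort within 90 days of hire.",
)
print(record)
```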
I’m the type of person who asks a basic question when presented with a challenge: is it difficult, or is it impossible? When confronted with ‘impossible,’ I may consider going back to the drawing board to regroup and come up with another plan of attack. When confronted with ‘difficult,’ I actually get more enthusiastic about the challenge! In the case of level 3 and 4 surveys and data, is it difficult? Yes. Is it impossible? No. What does the industry need? Are we meeting those needs? Levels 3 and 4 will get us closer to those answers, but we will never really know until we redo the math. Let’s roll up our collective sleeves and get started.