Tuesday, August 19, 2014


Training Initiatives in the Workplace:
The Importance of Utilizing Business Analytics to Evaluate Organizational Impact 

Stephanie Larsen
Bellevue University 


Leaders of organizations are trying to better understand if and how the training and development program dollars they expend each year are impacting the bottom line of their organizations. Research shows that leadership is willing to spend the money on these programs, but it also shows that their investments lack data to justify their worth. In fact, the American Society for Training and Development (ASTD) found that only three percent of all training courses are evaluated for business impact (Spitzer, 2005). Therefore, the purpose of this paper is to explore the importance of incorporating business analytics into evaluative methods when measuring the effectiveness of training initiatives. Further, this paper will discuss the challenges training professionals face when attempting to implement these evaluation methods. Finally, this paper will identify training evaluation models for both small organizational structures and medium to large organizational structures that rely heavily on business analytics to ultimately better align training initiatives with business objectives.

Mondore, Douthitt, and Carson (2011) defined analytics as “demonstrating the direct impact of people data on important business outcomes” (p. 21). For the purpose of this paper, the term metrics refers to tangible data that is easy to obtain and measure but of low value to overall organizational impact and strategy. Conversely, the term business analytics refers to intangible data that is difficult to obtain and measure but of high value to overall organizational impact and strategy. Analytics reveals the actual cause-and-effect relationship between a training activity and business outcomes, and supports building a strategy on that information (Mondore et al., 2011). The term training professional refers to any practitioner in the field of learning and development, onboarding, training, or organizational development. Mondore et al. (2011) asserted that using analytics to evaluate training programs affords “…leaders an opportunity to show the direct impact of their processes and initiatives on business outcomes” (p. 21). Mondore et al. also found that this kind of measurement gives CEO-level leadership valuable insight into return on investment (ROI), which ultimately leads to better decision making in the future. Furthermore, training professionals can better advocate for continued support and funding for programs and initiatives that are working and cut the ones that are not (Mondore et al., 2011).

Background 

Griffin (2012) reviewed several reasons why training professionals should better understand how to most effectively evaluate their organization’s investments in training initiatives. Griffin’s extensive review of the literature found that training professionals would be better able to justify resource allocation for these programs if they could properly evaluate them. Further, Griffin emphasized that better evaluation holds training professionals accountable for identifying ways of improving these programs year after year. Moreover, the author claimed that better evaluation enables better decision making when weighing the opportunity cost of investing in one training initiative over another. Lastly, Griffin warned that the most damaging side effect of weak evaluation methods could be a workplace that does not understand the impact of learning investments, with the result that trainers are not taken seriously by leadership or employees. Davenport (2006) provided examples of large, profitable organizations using complex data and statistical analysis to rise above their competitors in overall company-wide success. Davenport argued that industrial-strength business analytics across many activities within an organization (including training activities) is what sets organizations like Amazon, Harrah's, Capital One, and the Boston Red Sox apart.

However, even with all the supporting evidence showing the immense benefits of incorporating business analytics when evaluating training programs, the majority of organizations still do not use this type of evaluative method for training. Mondore et al. (2011) found that the definition and process details of using business analytics the right way have not been articulated consistently across the organizational development literature. Spitzer (2005) argued that it should be distressing for training professionals and organizations to know that only three percent of all training courses are evaluated for business impact; these programs are being measured through happiness indexes rather than bottom-line ROI results. Spitzer also claimed that even the training professionals who advocate for results-oriented measurement tend to fall back on low-level, easy-to-measure metrics like attendance and participant satisfaction, rather than doing the hard work to uncover true analytics that will impact business objectives.

Similarly, Baker, Chow, Woodford, and Maes (2012) found that training programs often fail within organizations because they are unable to establish a positive link between training and the organization’s desired change. Baker et al. (2012) found that companies continue to spend billions of dollars on training programs, but many still fall short of their intended goals, or at least lack the ability to measure whether those goals have been met. Kapoor (2010) argued that organizations have been implementing analytics in other areas such as financial management and forecasting, but have not yet taken advantage of the benefits that analytics has to offer in training and organizational development. Most concerning of all is the assertion by Spitzer and Conway (2002) that the survival of training programs is being threatened: “…the crisis is the failure to show that investing in training and employee development produces demonstrable business results” (p. 55).

Solution

The research shows strong arguments in favor of better aligning training initiatives with bottom-line business objectives. Researchers and experts in the field of organizational development and training favor utilizing a business analytics framework to more effectively evaluate training and development programs in the workplace. However, the research indicates that the majority of organizations have still not settled on a best practice for effectively incorporating analytics into training evaluation. This is not to say that evaluation methods do not exist as potential solutions to this problem. The next portion of this paper will introduce and explain two possible training evaluation models. The first model could be easily implemented in a smaller organization, while the second model is intended for a medium to large organization. Both training evaluation models emphasize the use of business analytics.

Small Organizational Model 

The Kirkpatrick Four Levels is a training evaluation model that has been an industry standard for more than 50 years. The levels are meant to guide training professionals through the different metrics that should be evaluated or measured in any training initiative (see Figure 1). However, the levels are not meant to prescribe a sequence of steps; in fact, this model requires that practitioners start with level four and work their way back to level one prior to developing or delivering any kind of training program.

[Figure 1. The Kirkpatrick Four Levels.]

Kirkpatrick Partners state that the intended use of their model has always been for practitioners to begin implementing the steps prior to executing any training initiative. According to Kirkpatrick and Kayser-Kirkpatrick (2014), it is nearly impossible to create significant training value if the four levels are implemented after a training program has already been delivered. Although the method has been widely recognized in the training field for five decades, Kirkpatrick and Kayser-Kirkpatrick report that in their Four Levels workshops, training professionals often relate level one to smiley-face surveys, level two to pre- and post-tests, and levels three and four to a ‘hope for the best’ mentality. Consequently, Kirkpatrick Partners developed the Kirkpatrick Business Partnership Model (see Figure 2), which is intended to help eliminate that ‘hope for the best’ mentality and provide training professionals with a step-by-step, executable framework.

[Figure 2. The Kirkpatrick Business Partnership Model.]

As detailed in Figure 2, there are many moving pieces and several steps that must be taken before a training professional begins to evaluate a training event using the Kirkpatrick Four Levels model. The initial steps involve data collection around identifying business needs, which could also be described as collecting business analytics. The next step is creating value and buy-in with key leadership by asking them as many questions as possible about their vision of success for that particular training initiative. The answers to those questions become the targeted outcomes.

Next, the training professional must form business partnerships with key players throughout the organization, such as supervisors. According to Kirkpatrick and Kayser-Kirkpatrick (2014), “Learning professionals need assistance and partnership from the business for training to deliver the results it is expected to yield” (p. 4). Furthermore, the authors note that training events alone do not provide substantial, positive, bottom-line impact to a business. This claim is backed by Garr and Loew (2011), who found that only 10 percent of learning takes place during a training event; 70 percent of learning takes place on the job, and the other 20 percent comes through coaching and mentoring with immediate supervisors. Kirkpatrick and Kayser-Kirkpatrick assert, “At this point, an organization has the opportunity to move from consumptive training metrics to impact metrics” (p. 9).

Medium to Large Organizational Model

The model that best fits a medium to large organizational structure was created by Mondore et al. (2011). This model is a comprehensive outline with detailed steps for effectively implementing a training initiative that aligns with the organization’s overall objectives, can be measured, and impacts the bottom line. Compared with the Kirkpatrick Four Levels, the Mondore et al. Business Partner Roadmap (see Figure 3) is more comprehensive in nature and would not be as easy to implement in a smaller organization due to a lack of available personnel and resources. It could be more readily executed in a medium to large organization with several departments, more personnel, and resources spanning differing areas of expertise. Effective execution would involve several individuals from various cross-functional teams throughout an organization.

[Figure 3. The Mondore et al. Business Partner Roadmap.]

The Mondore et al. Business Partner Roadmap emphasizes that an organization must first determine the two to three most critical outcomes on which to focus. The authors suggest interviewing key leadership before finalizing this step, both to get advice on critical outcomes and to secure leadership buy-in. The next step is to create a cross-functional data team consisting of measurement experts, key metric owners, and organizational development leadership. This team determines data requirements, links the necessary datasets, and conducts analyses. Then, the training team is charged with assessing critical outcomes. This step is intended to determine how an organization currently stores its data, including frequency of measurement (e.g., monthly, quarterly, annually), level of measurement (e.g., by line of business, by work unit, by manager, at the store level, at the department/function level), and organizational owners (e.g., the department or leader responsible for the particular measurement). Mondore et al. assert that these measurement characteristics must be understood before any linkages to employee data can be made. The authors state, “The goal is to have apples-to-apples comparisons of the data, which means that if you want to look at productivity numbers, you need to have productivity data that is measured at the same interval (e.g., monthly) and at the same level for each manager” (Mondore et al., 2011, p. 23).
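To make the “apples-to-apples” requirement concrete, the sketch below shows one way a cross-functional data team might roll training records up to the same interval (monthly) and the same level (per manager) as an existing productivity measure before linking the two datasets. This is a minimal illustration in Python with pandas; the file names, column names, and the choice of a simple correlation as a first-pass analysis are assumptions made for this example, not tooling prescribed by Mondore et al.

    # Hypothetical sketch of the "apples-to-apples" linkage step:
    # align training data and a business outcome at the same interval
    # (monthly) and the same level (per manager) before analyzing them.
    import pandas as pd

    # training.csv (assumed layout): one row per completed session
    #   manager_id, completion_date, hours_trained
    training = pd.read_csv("training.csv", parse_dates=["completion_date"])

    # productivity.csv (assumed layout): one row per manager per month
    #   manager_id, month (YYYY-MM), units_per_employee
    productivity = pd.read_csv("productivity.csv")

    # Roll training up to the same interval (month) and level (manager)
    training["month"] = training["completion_date"].dt.to_period("M").astype(str)
    monthly_training = (
        training.groupby(["manager_id", "month"], as_index=False)["hours_trained"]
        .sum()
    )

    # Link the datasets so each row is one manager-month with both measures
    linked = productivity.merge(
        monthly_training, on=["manager_id", "month"], how="left"
    )
    linked["hours_trained"] = linked["hours_trained"].fillna(0)

    # First-pass analysis: does more training time track with productivity?
    print(linked[["hours_trained", "units_per_employee"]].corr())

Only after the data are aligned at a common interval and level does a linkage analysis (a correlation here; regression or more formal methods in practice) become meaningful.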

Next, the team must build a list of priorities with data and analysis behind each one, which is intended to ensure impact on the organization’s bottom-line objectives. This step could also reveal which training events are not having the desired outcomes. Actual training event planning takes place in the next step, which is intended to show leadership that the training initiative is a deliberate attempt to focus on those employee processes, skills, attitudes, and demographics that have been shown to have a direct impact on the organization’s desired bottom-line outcomes. Finally, the team must measure, adjust, and re-prioritize in an ongoing effort to re-measure, assess progress, and calculate actual return on investment. Mondore et al. suggest picking two to three priorities, building action plans around them, measuring progress against those plans two to three more times, and then re-calculating the linkages and re-prioritizing.
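As a simple illustration of that final measure-and-adjust step, the sketch below re-calculates return on investment after each action-plan cycle. The figures and the basic ROI formula (net monetary benefit divided by program cost) are generic examples for illustration, not a calculation prescribed by Mondore et al.

    # Illustrative ROI tracking across re-measurement cycles. All figures
    # are hypothetical; the formula is the generic (benefit - cost) / cost.

    def training_roi(monetary_benefit: float, program_cost: float) -> float:
        """Return ROI as a percentage: (benefit - cost) / cost * 100."""
        return (monetary_benefit - program_cost) / program_cost * 100.0

    # Hypothetical re-measurements taken after each action-plan cycle
    remeasurements = [
        {"cycle": 1, "benefit": 110_000.0, "cost": 100_000.0},
        {"cycle": 2, "benefit": 135_000.0, "cost": 100_000.0},
        {"cycle": 3, "benefit": 150_000.0, "cost": 100_000.0},
    ]

    for m in remeasurements:
        print(f"Cycle {m['cycle']}: ROI = {training_roi(m['benefit'], m['cost']):.1f}%")

A priority whose ROI stays flat or negative across cycles becomes a candidate for re-prioritization, while a rising ROI supports continued investment.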

Conclusion 

This paper identified two possible models that serve as potential solutions to the struggle that many training professionals face, which is to better align training initiatives with overall business objectives to gain desired business impact. Both of the aforementioned models heavily rely on the collection and use of business analytics to effectively implement and evaluate these training events from beginning to end. The first model could be easily implemented in a smaller organizational structure, while the second model was intended for a medium to large organizational structure.

In summary, leaders of organizations are trying to better understand if and how the training program dollars they expend each year are impacting the bottom line of the organization. Research shows that leadership is willing to spend the money on these programs, but it also shows that their investments lack data to justify their worth. The purpose of this paper was to explore the importance of incorporating business analytics into evaluative methods when measuring the effectiveness of training initiatives. Further, this paper discussed the challenges training professionals face when attempting to implement these evaluation methods. Finally, this paper identified training evaluation models for both small organizational structures and medium to large organizational structures that rely heavily on business analytics to ultimately better align training initiatives with business objectives.

References

Baker, F., Chow, A., Woodford, K., & Maes, J. D. (2012). Applying continuous improvement techniques to instructional design technology (IDT) for greater organizational effectiveness. Organization Development Journal, 30(1), 53-62. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/963777467?accountid=28125

Davenport, T. H. (2006, January). Competing on analytics. Harvard Business Review, 84(1), 98-107. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/227818006?accountid=28125

Garr, S., & Loew, L. B. (2011). High-impact leadership development: Best practices for building 21st century leaders. Bersin & Associates. Retrieved from http://www.bersin.com/News/Content.aspx?id=15596

Griffin, R. (2012). A practitioner friendly and scientifically robust training evaluation approach. Journal of Workplace Learning, 24(6), 393-402. doi:10.1108/13665621211250298

Kapoor, B. (2010). Business intelligence and its use for human resource management. The Journal of Human Resource and Adult Learning, 6(2), 21-30. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/867401292?accountid=28125

Kirkpatrick, J., & Kayser-Kirkpatrick, W. (2014). The Kirkpatrick four levels [White paper]. Retrieved from http://www.kirkpatrickpartners.com/Portals/0/Resources/Kirkpatrick%20Four%20Levels%20white%20paper.pdf

Mondore, S., Douthitt, S., & Carson, M. (2011). Maximizing the impact and effectiveness of HR analytics to drive business outcomes. People & Strategy, 34(2), 20-27.

Spitzer, D. R. (2005). Learning effectiveness measurement: A new approach for measuring and managing learning to achieve business results. Advances in Developing Human Resources, 7(1), 55-70. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/221135772?accountid=28125

Spitzer, D. R., & Conway, M. (2002). Link training to your bottom line. Alexandria, VA: American Society for Training and Development.

Tuesday, March 5, 2013

MMC640: Final Thoughts


When I began this class 12 weeks ago, I was feeling overwhelmed and under-accomplished. I was having a hard time seeing the light at the end of the tunnel, so to speak. The good news is, as I come to a close on my 18th credit of this 36-credit-hour master's program, I am feeling energized and I can see the light!

I enjoyed this class and feel like I accomplished all the major course objectives one way or another. In the beginning I was concerned about all the multimedia and my ability to find the time to do good work; however, I found that using the multimedia to make the commercial was my favorite part of the class! I found Photo Story 3 incredibly user-friendly, and I would use it again for both personal and professional projects.

I was glad that the final paper was completed in milestones; however, I didn't use the milestones to my benefit as much as I wish I had. I underestimated the amount of resources I ultimately came across when researching my topic, which changed my initial ideas of how my paper was going to flow. Because I didn't account for that extra time, I ended up not turning in a rough draft to the writing center, which is something I normally do, and I know that hurt my grade. My paper ended up sounding a little too opinionated rather than strictly based on formal research.

Even though my paper didn’t end up exactly the way I would have liked, I leave this class with lessons learned and a reinvigorated feeling of accomplishment and motivation to continue forward.

-S

Tuesday, January 8, 2013

Facebook: Documenting Every Unethical Decision Since You Were Too Young to Care

I started thinking about integrity and all the electronic forms of communication we engage in today, be it informal Facebook messaging or "formal" work emails. So many successful, professional adults have made some really serious mistakes using electronic forms of communication. These mistakes can show lapses in judgment and even a lack of integrity. When this happens, the general public is ready to be the judge and the jury. We want to know all the dirt, and some go to unethical lengths, even violating privacy, to obtain the scandalous details.

Our society has become obsessed with these scandalous stories, almost seeking them out at times. But when we find them, we are so very judgmental and almost believe that we were entitled to that information. The truth is, when people choose to be in the public spotlight, they are taking the risk that their personal life and decisions could become public. Also, as a society we hold those we elect to public office to a higher moral standard. I get that, but my point is that people make mistakes in their personal lives, and that doesn't unequivocally mean they aren't good at their jobs. Does it?

For example, former CIA Director David Petraeus resigned after email messages were discovered that showed he was having an extramarital affair. Petraeus was a highly decorated four-star general who served more than 37 years in the United States Army. Numerous senators, governors, and congressmen have been caught in email and texting scandals as well. So have school principals and other figures who we expect will not make the same mistakes we do. Here is some food for thought: these people made these decisions when they were adults with professional careers who knew that scandals could potentially damage those careers. Can you imagine if Facebook had been around when these people were in college?

Well, I am 26 years old, and Facebook started when I was in college. What is going to happen when people my age begin running for public office, or become elementary school principals, or run for President? We have been essentially documenting our lives from a very young age, from an age when we were not thinking about running for public office someday, from an age when we didn't think a whole lot about the consequences of our actions. Can you imagine the kinds of things that will be exposed about people when my generation begins stepping into these high-visibility professions? We might think twice about how judgmental we are or how important privacy is. How is my generation going to be affected by our Facebook documentaries...every bad decision and questionable photo?

S

Tuesday, December 18, 2012

Ethics & Decision Making: Self Assessment

Will I ever not be in school??!?  As I take on my sixth class of 12 in my master's program, I find myself feeling less than motivated and positive.  Why don't I have the glass-half-full attitude?  I'm almost halfway done (I try to tell myself).  Instead, I feel like, how am I ONLY halfway done??  I'm 26 and I have been a student my whole life with no more than a few months off.  Anyone else feeling overwhelmed or having a hard time seeing the light at the end of the tunnel?

On a positive note, so far I like this class a lot more than the last class I took in the MMC program.  Especially after looking through the course objectives and syllabus, I feel this class will match my interests and abilities much more than the last one did.  I find myself more motivated as I read through the final projects and course objectives.

I also find all the little extras in this course really stimulating, like the video overviews and the thoughts/images for the week.

I think making the commercial will be fun, but I am also a little concerned about the amount of time it will take. 

I love that the final paper is completed in milestones.  I feel like those kinds of deadlines are much truer to a real-world work environment.  I'm worried about choosing a topic for the final paper and final commercial.  Because of that, I will start considering topics now and try to avoid the last-minute panic (as I sometimes experience with final papers/projects).

Well, onward I go.  I feel like Dory in Finding Nemo sometimes...just keep swimming, just keep swimming...

S


