Training Initiatives in the Workplace:
The Importance of Utilizing Business Analytics to Evaluate Organizational Impact
Stephanie Larsen
Bellevue University
Leaders of organizations are trying to better understand if and how the training and development program dollars they expend each year are impacting the bottom line of their organizations. Research shows that leadership is willing to spend the money on these programs, but it also shows that their investments lack data to justify their worth. In fact, the American Society for Training and Development (ASTD) found that only three percent of all training courses are evaluated for business impact (Spitzer, 2005). Therefore, the purpose of this paper is to explore the importance of incorporating business analytics into evaluative methods when measuring the effectiveness of training initiatives. Further, this paper will discuss the challenges training professionals face when attempting to implement these evaluation methods. Finally, this paper will identify training evaluation models for both small organizational structures and medium to large organizational structures that rely heavily on business analytics to ultimately better align training initiatives with business objectives.
Mondore, Douthitt, and Carson (2011) defined analytics as “demonstrating the direct impact of people data on important business outcomes” (p. 21). For the purpose of this paper, the term metrics is defined as tangible data that is easy to obtain and measure, but is of low value to overall organizational impact and strategy. Conversely, the term business analytics is defined as intangible data that is difficult to obtain and measure, but is of high value to overall organizational impact and strategy. Analytics reveals the actual cause-and-effect relationship between a training activity and business outcomes, and allows a strategy to be built on that information (Mondore et al., 2011). The term training professional is defined as any practitioner in the field of learning and development, onboarding, training, and organizational development. Mondore et al. (2011) assert that using analytics to evaluate training programs affords “…leaders an opportunity to show the direct impact of their processes and initiatives on business outcomes” (p. 21). Mondore et al. also found that this kind of measurement gives CEO-level leadership valuable insight into return on investment (ROI). This insight ultimately leads to better decision-making in the future. Furthermore, training professionals can better advocate for continued support and funding for programs and initiatives that are working and cut the ones that are not (Mondore et al., 2011).
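The ROI insight described above can be made concrete with a minimal sketch. The formula below is the standard return-on-investment calculation (net program benefits divided by program costs); it is not drawn from the cited sources, and all dollar figures are hypothetical.

```python
# Illustrative sketch, not from the cited sources: the standard ROI formula,
# ROI% = (benefits - costs) / costs * 100. All figures are hypothetical.

def training_roi(monetary_benefits: float, program_costs: float) -> float:
    """Return training ROI as a percentage of program costs."""
    if program_costs <= 0:
        raise ValueError("program_costs must be positive")
    return (monetary_benefits - program_costs) / program_costs * 100

# A hypothetical program costing $50,000 that yields $80,000 in measured benefits:
print(training_roi(80_000, 50_000))  # 60.0 -> a 60% return on investment
```

Expressing program value this way, rather than as attendance counts or satisfaction scores, is precisely the shift from metrics to analytics that this paper advocates.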
Background
Griffin (2012) reviewed several reasons why training professionals should better understand how to most effectively evaluate their organization’s investments in training initiatives. Griffin’s extensive review of literature found that training professionals would be able to better justify resource allocation for these programs if they could properly evaluate those programs. Further, Griffin emphasized that better evaluation will hold them accountable for identifying ways of improving these programs year after year. Moreover, the author claimed that better evaluation enables better decision making pertaining to identifying the opportunity cost of investing in one training initiative over another. Lastly, Griffin warned that the most negative side effect of not implementing better evaluation methods could be a workplace that does not understand the impact of learning investments and therefore trainers are not taken seriously by leadership or employees. Davenport (2006) provided examples of large, profitable organizations using complex data and statistical analysis to rise above their competitors in overall company-wide success. Davenport argued that industrial-strength business analytics across many activities within an organization (including training activities) is what sets organizations like Amazon, Harrah's, Capital One, and the Boston Red Sox apart.
However, even with all the supporting evidence showing the immense benefits of incorporating business analytics when evaluating training programs, the majority of organizations are still not utilizing this type of evaluative method. Mondore et al. (2011) found that the definition and process details of using business analytics correctly have not been articulated consistently across the organizational development literature. Spitzer (2005) argued that it should be distressing for training professionals and organizations to know that only three percent of all training courses are evaluated for business impact. These programs are being measured through happiness indexes rather than bottom-line ROI results. Spitzer also claimed that even the training professionals who advocate for results-oriented measurement tend to fall back on low-level, easy-to-measure metrics like attendance and participant satisfaction, rather than doing the hard work to uncover true analytics that will impact business objectives.
Similarly, Baker, Chow, Woodford, and Maes (2012) found that training programs are often failures within organizations because they are unable to establish a positive link between training and the organization’s desired change. Baker et al. (2012) found that companies are continuing to spend billions of dollars on training programs, but many are still falling short of their intended goals, or at least lack the ability to measure whether those goals have been met. Kapoor (2010) made the argument that organizations have been implementing analytics in other areas such as financial management and forecasting, but that they have not yet taken advantage of the benefits that analytics has to offer in training and organizational development. Most concerning of all is the assertion by Spitzer and Conway (2002) that the survival of training programs is being threatened, stating “…the crisis is the failure to show that investing in training and employee development produces demonstrable business results” (p. 55).
Solution
The research shows strong arguments in favor of better aligning training initiatives with bottom-line business objectives. Researchers and experts in the field of organizational development and training favor utilizing a business analytics framework to more effectively evaluate training and development programs in the workplace. However, the research indicates that the majority of organizations have still not established a best practice for effectively implementing analytics into training evaluation. This is not to say that evaluation methods do not exist as potential solutions to this problem. The next portion of this paper will introduce and explain two possible training evaluation models. The first model could be easily implemented in a smaller organization, while the second model is intended for a medium to large organization. Both training evaluation models emphasize the use of business analytics.
Small Organizational Model
The Kirkpatrick Four Levels is a training evaluation model that has been an industry standard for more than 50 years. The levels are meant to guide training professionals through different metrics that should be evaluated or measured in any training initiative (see Figure 1). However, the levels are not sequential steps to be executed in numerical order. Rather, this model requires that practitioners start with level four and work their way back to level one prior to developing or delivering any kind of training program.
Kirkpatrick Partners state that the intended use of their model has always been for practitioners to begin implementing the steps prior to executing any training initiative. According to Kirkpatrick and Kayser-Kirkpatrick (2014), it is nearly impossible to create significant training value if the four levels are implemented after a training program has already been delivered. Although this method has been widely recognized in the training field for five decades, Kirkpatrick and Kayser-Kirkpatrick claim that when conducting the Four Levels workshops, training professionals often relate level one to smiley surveys, level two to pre- and post-tests, and levels three and four to a ‘hope for the best’ mentality. Consequently, Kirkpatrick Partners developed the Kirkpatrick Business Partnership model (see Figure 2), which is intended to help eliminate a ‘hope for the best’ mentality and provide training professionals with a step-by-step executable framework.
As detailed in Figure 2, there are many moving pieces and several steps that must be taken before a training professional begins to evaluate a training event using The Kirkpatrick Four Levels model. The initial steps involve data collection around identification of business needs, which could also be described as collecting business analytics. The next step is creating value or buy-in with key leadership by asking them as many questions as possible about their vision of success for that particular training initiative. The answers to those questions become the targeted outcomes.
Next, the training professional must form business partnerships throughout the organization with key players, like supervisors. According to Kirkpatrick and Kayser-Kirkpatrick (2014), “Learning professionals need assistance and partnership from the business for training to deliver the results it is expected to yield” (p. 4). Furthermore, the authors note that training events alone do not provide substantial positive, bottom-line impact to a business. This claim is backed by Garr and Loew (2011), who found that only 10 percent of learning takes place during a training event. The authors assert that 70 percent of learning takes place on the job and the other 20 percent is in the form of coaching and mentoring with immediate supervisors. Kirkpatrick and Kayser-Kirkpatrick assert, “At this point, an organization has the opportunity to move from consumptive training metrics to impact metrics” (Kirkpatrick & Kayser-Kirkpatrick, 2014, p. 9).
Medium to Large Organizational Model
The model that best fits into a medium to large organizational structure was created by Mondore et al. (2011). This model is a comprehensive outline with detailed steps on effectively implementing a training initiative that aligns with the organization’s overall objectives, can be measured, and impacts the bottom line. When compared to The Kirkpatrick Four Levels, the Mondore et al. Business Partner Roadmap (see Figure 3) is more comprehensive in nature and would not be as easy to implement in a smaller organization due to a lack of available personnel and resources. This model is better suited to a medium to large organization with several departments, more personnel, and resources spanning differing areas of expertise. Effective execution would involve several individuals from various cross-functional teams throughout an organization.
The Mondore et al. Business Partner Roadmap emphasizes that an organization must first determine the top two to three most critical outcomes on which to focus. They suggest interviewing key leadership before finalizing this step to not only get advice on critical outcomes, but also to get key leadership buy-in. The next step is to create a cross-functional data team. This step involves identifying a team consisting of measurement experts, key metric owners, and organizational development leadership. This team determines data requirements, links the necessary datasets, and conducts analyses. Then, the training team is charged with assessing critical outcomes. This step is intended to determine how an organization currently stores its data, like frequency of measurement (e.g., monthly, quarterly, annually), level of measurement (e.g., by line of business, by work unit, by manager, at the store level, at the department/function level), and organizational owners (e.g., the department or leader of the particular measurement). Mondore et al. assert that these measurement characteristics must be understood before any linkages to employee data can be made. The authors state, “The goal is to have apples-to-apples comparisons of the data, which means that if you want to look at productivity numbers, you need to have productivity data that is measured at the same interval (e.g., monthly) and at the same level for each manager” (Mondore et al., 2011, p. 23).
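The “apples-to-apples” linkage Mondore et al. describe can be sketched as joining two datasets only where both are measured at the same level (here, by manager) and the same interval (here, monthly). This is a hypothetical illustration, not the authors’ own procedure; every name and number below is invented.

```python
# Hypothetical sketch of linking training data to productivity data at the
# same level (manager) and interval (month). All names and values are invented.

training_hours = {       # (manager, month) -> hours of training delivered
    ("Smith", "2011-01"): 12,
    ("Jones", "2011-01"): 4,
    ("Smith", "2011-02"): 8,
}
productivity = {         # (manager, month) -> units produced per employee
    ("Smith", "2011-01"): 102,
    ("Jones", "2011-01"): 95,
    ("Lee",   "2011-02"): 99,   # no matching training record -> excluded
}

# Keep only the keys present in both datasets, so every comparison is made
# at the same level of measurement and over the same interval.
linked = {
    key: (training_hours[key], productivity[key])
    for key in training_hours.keys() & productivity.keys()
}
print(sorted(linked))  # [('Jones', '2011-01'), ('Smith', '2011-01')]
```

Records that cannot be matched on both dimensions are dropped rather than compared, which is exactly why the authors insist the measurement characteristics be understood before any linkage is attempted.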
Next, the team must build a list of priorities that have data and analysis behind each, which is intended to ensure impact on the organization’s bottom-line objectives. Moreover, this step could potentially show which training events are not having the desired outcomes. Actual training event planning is to take place in the next step. This step is intended to show leadership that the training initiative is a deliberate attempt to focus on those employee processes/skills/attitudes/demographics that have been shown to have a direct impact on the organization’s desired bottom-line outcomes. Finally, the team must measure and adjust/re-prioritize. This step is an ongoing effort to re-measure, to assess progress, and to calculate actual return on investment. Mondore et al. suggest picking two to three priorities, building action plans around those, measuring progress against those plans two to three more times, and then re-calculating the linkages and re-prioritizing.
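The re-prioritization step above can be sketched as ranking candidate priorities by their measured impact and keeping only the top two to three for the next action-planning cycle. The driver names and impact scores below are hypothetical stand-ins for whatever linkage statistics the data team actually produces.

```python
# Hedged sketch of re-prioritization: given measured impact scores for each
# candidate driver (hypothetical values), keep the strongest two or three
# for the next cycle of action planning and re-measurement.

def reprioritize(impact_scores: dict, keep: int = 3) -> list:
    """Return the `keep` drivers with the strongest measured impact."""
    ranked = sorted(impact_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [driver for driver, _ in ranked[:keep]]

impact = {"coaching": 0.41, "e-learning": 0.12, "onboarding": 0.35, "safety": 0.08}
print(reprioritize(impact, keep=2))  # ['coaching', 'onboarding']
```

Re-running the ranking after each round of re-measurement is what makes the final step of the roadmap an ongoing loop rather than a one-time evaluation.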
Conclusion
This paper identified two possible models that serve as potential solutions to the struggle that many training professionals face, which is to better align training initiatives with overall business objectives to gain desired business impact. Both of the aforementioned models heavily rely on the collection and use of business analytics to effectively implement and evaluate these training events from beginning to end. The first model could be easily implemented in a smaller organizational structure, while the second model was intended for a medium to large organizational structure.
In summary, leaders of organizations are trying to better understand if and how the training program dollars they expend each year are impacting the bottom line of the organization. Research shows that leadership is willing to spend the money on these programs, but it also shows that their investments lack data to justify their worth. The purpose of this paper was to explore the importance of incorporating business analytics into evaluative methods when measuring the effectiveness of training initiatives. Further, this paper discussed the challenges training professionals face when attempting to implement these evaluation methods. Finally, this paper identified training evaluation models for both small organizational structures and medium to large organizational structures that rely heavily on business analytics to ultimately better align training initiatives with business objectives.
References
Baker, F., Chow, A., Woodford, K., & Maes, J. D. (2012). Applying continuous improvement techniques to instructional design technology (IDT) for greater organizational effectiveness. Organization Development Journal, 30(1), 53-62. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/963777467?accountid=28125
Davenport, T. H. (2006, January). Competing on analytics. Harvard Business Review, 84, 98-107. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/227818006?accountid=28125
Garr, S., & Loew, L. B. (2011). High-impact leadership development: Best practices for building 21st century leaders. Bersin & Associates. Retrieved from http://www.bersin.com/News/Content.aspx?id=15596
Griffin, R. (2012). A practitioner friendly and scientifically robust training evaluation approach. Journal of Workplace Learning, 24(6), 393-402. doi:10.1108/13665621211250298
Kapoor, B. (2010). Business intelligence and its use for human resource management. The Journal of Human Resource and Adult Learning, 6(2), 21-30. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/867401292?accountid=28125
Kirkpatrick, J., & Kayser-Kirkpatrick, W. (2014). The Kirkpatrick four levels [White paper]. Retrieved from http://www.kirkpatrickpartners.com/Portals/0/Resources/Kirkpatrick%20Four%20Levels%20white%20paper.pdf
Mondore, S., Douthitt, S., & Carson, M. (2011). Maximizing the impact and effectiveness of HR analytics to drive business outcomes. Strategic Management Decisions, 34(2), 21-27.
Spitzer, D. R. (2005). Learning effectiveness measurement: A new approach for measuring and managing learning to achieve business results. Advances in Developing Human Resources, 7(1), 55-70. Retrieved from http://ezproxy.bellevue.edu:80/login?url=http://search.proquest.com/docview/221135772?accountid=28125
Spitzer, D. R., & Conway, M. (2002). Link training to your bottom line. Alexandria, VA: American Society for Training and Development.