In Data and Analytics for Instructional Designers, instructional design expert Megan Torrance provides a practical, accessible guide for instructional designers on developing learning experiences that leverage data collection and analysis to evaluate and improve workplace learning and development and to align it more closely with business goals.
Until now, there has been an unaddressed need for data and analytics guidance in workplace learning and development. Most books, resources, and degree programs on data and analytics focus on marketing, sales, and operational aspects of business, where data is easily accessible. In those areas, business professionals establish metrics and benchmarks and look to data for insights, information, and performance indicators.
In the L&D space, much of the interest to date has centered on student analytics in academia, which has fueled the growth of MOOCs. But again, this does not translate to the workplace learning environment. There are some excellent books about learning measurement in the corporate space. What is missing across these resources is a focus on the unique, granular data attainable in corporate learning, as well as specific direction for instructional design teams on generating this data to feed downstream uses.
One reason is the apparent scarcity of data in L&D compared to other business functions, and thus a tendency not to use data to drive decisions. In most organizations, finance, sales, and operations all have granular data available within a few clicks to drive their decision-making. In L&D, there is often little beyond training completion data: Did learners complete the training? When? How long did it take? What are the test scores? Did they like it? Are they motivated to apply it?
Torrance explains, “We need to take advantage of more data in L&D. We tend not to have good insight into the learning experience itself. For example, what did learners click on? What did they do in class? How many times did they practice? Who gave them feedback along the way? Nor do we have good insight into what happens after the learning event. What outside sources did learners use to fill in any remaining gaps in their knowledge?”
SOURCE: PRWeb