
Updating ADDIE: Introducing the “ADDEDD Model”

The world of learning is changing fast. In recent years, we’ve seen the emergence of a host of new technologies that promise to dramatically improve our ability to deliver learning outcomes. In wave after wave, augmented reality, chatbots, virtual reality, xAPI, blockchain and more are both providing new options and complicating the planning process. In conversations with instructional designers, L&D teams and technologists, it has become clear that ADDIE needs an update, and this paper is an attempt to do just that.

Frameworks

An important part of doing anything well is the development of frameworks. These tools solidify experience, codifying the patterns that show up in every project so we can focus our attention on what is different in each project. When these patterns change, it is time to revisit our frameworks. New learning technologies, from mobile to voice assistants to Virtual, Augmented, and Mixed Realities, have begun maturing to the point where the learning industry must learn to use them well – and doing so is resulting in new patterns. These new patterns make old assumptions outmoded. The first such assumption is that designers and specifiers of learning projects understand how these new media types work. As these learning media types grow in number, we face another issue: the instructional designer’s preferences don’t always reflect those of learners – the people who must actually perform the job. In this paper we endeavor to augment these frameworks with some new ideas, empowering the instructional design of the learning projects of the future.

Old Assumptions

For years, the ADDIE model of instructional design has guided development of learning programs, doing what a good framework does: ensuring steps are communicated to partners and clients, understood and followed, and good criteria for program success are developed. Comprising five steps, the ADDIE model begins with analysis, which leads to design, development, implementation, and finally evaluation.


Evaluation can, in fact, be part of the process throughout, as we show in figure 1. And most real development projects have multiple client/partner checkpoints. However, it is also true that most of those checkpoints only involve viewing PowerPoint slides, or perhaps a storyboard. Truly experiencing the learning media along the way has not always been necessary – everyone understands how a quiz will work, or how a video will be experienced. Relatedly, the relative abstractness of earlier media meant that the subject matter specifics were easily conveyed via text. What we find with newer media, like virtual reality, is that the nuance of what is now possible means a subject matter expert is often needed during learning resource development. It is from the obsolescence of that key assumption that the need for an update to ADDIE emerged – we once assumed everyone involved in learning program development intuitively ‘gets’ how media work, because in the past they did. Everyone you know has been watching video for all of their lives, and reading almost as long. The modalities of learning have been, until now, so well understood as to allow highly abstract development artifacts like scripts to suffice for much of evaluation. These assumptions – that we really know how to develop learning with the available media, and that our partners can guide and approve our work product intuitively – have been part of the industry since the dawn of eLearning. And we believe they no longer apply.

Whole New Worlds

We now have an unprecedented opportunity to utterly change how learning is designed, developed and delivered. Every part of the learning value chain has been improved, expanded and made more sophisticated. We have every reason to believe that this process will continue, with the ongoing improvement of the components of modern learning. Let’s review some of these:

Virtual Reality: Since Facebook’s 2014 purchase of Oculus launched the latest wave of virtual reality, VR has grown in both capabilities and attention from the learning industry. VR allows entirely new learning programs to be created, like immersive simulations of everything from oil rigs to employee coaching. At present, most people involved in learning have little or no experience with VR, limiting their ability to brief and assess projects without new approaches to the process.

Augmented (Mixed) Reality: Less developed than Virtual Reality, Augmented Reality is moving very quickly, with companies already employing AR glasses in training. This blending of virtual and real-world experiences opens entirely new worlds for the industry to develop – but again, few have much experience with the technology, and therefore little intuition about it.

Voice Interfaces: For years we have had some level of voice input and control, but the advent of sophisticated, effective Natural Language Processing (NLP) has transformed our ability to use voice in learning programs. Far from the clumsy Siri experiences of the past, NLP-powered voice applications can provide intuitive, natural-feeling interfaces. But how do they help? How does voice impact learning, and where should we use it? There is no standard, no commonly understood set of rules.

Haptic Interfaces: It has been observed that media like video, or even virtual reality, can convey knowledge well but do less well at building physical skills. Whether it is welding or dentistry, to truly learn a skill, the hands often need to be involved. Haptic interfaces are rapidly developing in sophistication, as is our understanding of how to deploy them.

Artificial Intelligence: AI is moving even faster than VR/AR, while at the same time being less a single technology than a set of techniques for empowering other things. AI will make everything from user interfaces to voice to simulations all work better, be more flexible, and most importantly, tie our training to context in a way that was impossible before. We are already seeing AI used in analytics and predictive evaluation.

These five learning modalities are here with us now, and all five are receiving billions of dollars of investment from large companies such as Apple, Google, Microsoft and others, and have spawned multiple, overlapping startup ecosystems around the world. We will find more and more opportunities to leverage these technologies, both individually and in combination. To take advantage of these new opportunities, we need a new approach. Whereas old assumptions still work in the older, SCORM-centric context of written words, tests and delivered videos, they are harder to apply when we simply don’t know how best to design, develop and deploy our media.

Towards an Agile framework

It turns out that this problem of not knowing how things will work until you’ve tried them is not new; it has been faced by both the software and the creative industries. We can learn from the approaches each has taken: Agile development and prototyping, respectively. Agile development is sometimes misunderstood in the learning industry – it doesn’t just refer to being flexible and “agile” in the normal sense of the word. Agile refers to a specific approach and methodology for software development that emphasizes delivering product for review fast and often. The point is to avoid doing all the work in the designer’s head and instead get the work in front of real people as early and often as possible. It was developed in response to the creeping complexity of software development, and the very real problem of designers iterating a project away from its real requirements. Similarly, IDEO, developers of design thinking, have long championed the use of prototypes in creative development. This is important because it forces abstract, half-formed ideas to be solidified, both literally and figuratively, into something tangible. One challenge is that many people cannot visualize the finished product from storyboards – they need an example or visuals to truly understand the learning program’s end product. This challenge is amplified when emerging technologies add a whole new set of variables to the mix. It is this tangibility that is central to our new approach to learning program development, because the engagement of the senses it allows reveals both problems and opportunities that would otherwise remain hidden.


We are seeing leading practitioners, like Megan Torrance, Chief Energy Officer of Torrance Learning, leverage true agile approaches quite successfully for their customers through LLAMA, their unique approach to agile. We cannot know if a virtual simulation is too confusing if we’re not in it. We cannot know if the computer-generated agent in our mixed reality experience is distracting if we don’t build it and try it. We cannot know if the machine learning-enabled user interface moves too quickly if we don’t put it in front of real people. This need to iterate with real things is all the more relevant because these new technologies are not as mature as video or PDFs, and as such, mistakes made early can cost much more to fix.

Introducing ADDEDD, a new approach to learning program development

In conferences and discussions with clients and partners, we at Aquinas have heard again and again about dissatisfaction with ADDIE. In interviews with designers and developers of virtual reality, augmented reality and blended learning, we identified key areas for improvement, most of which were addressed by two parts of the process: “experience” and deployment. As we discuss below, these two are uniquely relevant to modern learning design. So we developed the model below, a blend of what still works from ADDIE and the newer imperative to prototype and adopt an agile approach:

We begin with “Analyze”. ADDIE had that right – we first need to chart the overall course. This takes the form of a series of questions about what is to be learned, by whom, how, and what they already know. We need to understand the desired performance, the tasks that performance involves, and the business needs that support the project.

The next step, “Determine,” reflects the growing palette of media available to today’s designers, a palette that will only grow as technologies mature. This is a framework of its own, which we discuss below, that looks at the tradeoffs between different technological options in light of learning objectives. Any such framework will be a beginning, as the underlying tradeoffs will change as different technologies mature and issues are addressed. A good example of this is in virtual reality, where current offerings either capture interpersonal nuances like facial expressions but do not allow for interactivity (immersive 360 video), or do allow interactivity, but at the cost of visual and interpersonal nuance (rendered VR). Unlike video and older eLearning modalities, in most instances it is crucial that the corporate and design teams experience examples of these choices before finalizing them. What might seem good enough in rendered VR might actually miss critical learning goals, for example. In other instances, the loss of rich interactivity might be too high a price to pay for the nuance of 360 video.

The “Design” step is similar to ADDIE, in that the experience and sequences are developed. What is new is the need to get into the experiences as fast as is feasible, before the details have been worked out. In this phase, we support rapid prototyping methods such as SAM, the Successive Approximation Model. This method is more of a design-develop-evaluate process that enables the team to view and provide feedback throughout, rather than waiting to see a finished product.

Iterations of “Experience” must happen throughout the process, since most stakeholders are new to emerging technologies. Generic examples should be provided at the beginning of the project. Wireframes can be provided after design to validate the work product before development. And during development, additional validations will be necessary as the project takes shape. For the foreseeable future, we will not be able to design or assess learning programs without being in them. Experience is the sine qua non of learning experience development, in a way that was not true in the past.

Once design has been approved, we must “Develop” the project, and here again the new technological options on offer create both opportunities and challenges. When media are new, there is a need to isolate their development, as we often don’t quite know the process, and as a result aren’t able to integrate that development with broader learning program development. Typically, we’re running pilots to understand the reality of what the experience is like for learners, among other variables. But we need to be integrating these technologies as fast as possible, and herein lies the development challenge – and why we’ve separated this into its own step. As development proceeds, the prototyping and iterating that began in the design phase continues, but the focus shifts from what to create to how to implement what was designed.

Finally, we must “Deploy” our program, and here again the relative immaturity of these technologies means that successful deployment cannot be assumed. It often takes a few rounds of in-field testing and user experience to get it just right. While it might seem that this issue will resolve itself, it is apparent from the list above, and from other technologies not quite on our radar yet, that we as learning program developers are going to have a long stream of new products and technologies to consider as development continues and accelerates. Deployment includes reviewing the data – both the learner analytics and the program specifics. What did we learn from the project that can be applied to the next project to streamline the process and reduce costs? Did we achieve our learning objectives and goals? Did it meet or exceed the performance needs?


Our Stages, in Greater Depth

1. Analyze.

The basics of which competencies need to be built, who needs to acquire them, and how they will learn have not changed, and of all the steps in the ADDEDD model, this one is the most like its “ADDIE” equivalent. What has changed is a list of additional questions that can open up new possibilities for learning:

a. What learning goals have not been accomplished in the past?
b. Do these learning objectives include physical skills?
c. How important is analytics?

d. What improvements in behavior could be enhanced by using emerging technologies?
e. Would a virtual world experience help in situations where access to tools or processes is not possible in the real world?

2. Determine.

The outcomes of the “Analyze” step should lead to a clear understanding of what we need learners to experience. It is then time to consider which of the expanding list of technologies is best suited to achieving these goals – and despite the focus on new technologies that led to the development of the ADDEDD model, it is worth emphasizing that “video and a 2-page PDF” remains a solid answer. We must know what is possible with emerging technology and how to make the right choice of methods and media. If we have the ability to create using emerging technology but the current state of our IT infrastructure will not support it, do we abandon the project? How does the project align with the company’s vision and mission – are there business strategies in place or forecasted to support the emerging technology effort? We include here an example of such tradeoffs, focused on virtual reality.
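As a rough illustration of this kind of tradeoff analysis – a minimal sketch, not the tradeoff example referenced above – a weighted scoring matrix can make the choice of media explicit. The criteria, weights and scores below are hypothetical and would come out of the “Analyze” step; they simply encode the 360-video-versus-rendered-VR tradeoff described earlier.

```python
# Illustrative only: a minimal weighted-scoring sketch for comparing media options
# against learning objectives. Options, criteria and weights are hypothetical.

CRITERIA_WEIGHTS = {
    "interactivity": 0.4,          # how much learners must act and decide
    "interpersonal_nuance": 0.3,   # facial expressions, tone, body language
    "deployment_readiness": 0.2,   # fit with the current IT infrastructure
    "cost": 0.1,                   # higher score = less expensive
}

# Scores from 1 (poor) to 5 (strong) per criterion.
MEDIA_OPTIONS = {
    "immersive 360 video": {"interactivity": 1, "interpersonal_nuance": 5,
                            "deployment_readiness": 4, "cost": 4},
    "rendered VR":         {"interactivity": 5, "interpersonal_nuance": 2,
                            "deployment_readiness": 2, "cost": 2},
    "video + 2-page PDF":  {"interactivity": 1, "interpersonal_nuance": 3,
                            "deployment_readiness": 5, "cost": 5},
}

def score(option_scores: dict) -> float:
    """Weighted sum of criterion scores for one media option."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in option_scores.items())

if __name__ == "__main__":
    # Print options from best to worst fit under the assumed weights.
    for name, scores in sorted(MEDIA_OPTIONS.items(), key=lambda kv: -score(kv[1])):
        print(f"{name:22s} {score(scores):.2f}")
```

Changing the weights as learning objectives shift is the point: the same options can rank very differently for a knowledge-transfer goal than for an interpersonal-skills goal.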


3. Design.

The design phase is driven by the same questions as in ADDIE, with some additional complexity given the broader range of media that are often used. These questions include:

• What do we need to happen?
• What sequence is necessary?
• What does each of our learning objectives really require?
• Who should be there?
• What should they do?
• Where should they be?
• What interactions are necessary?
• What tracking do we need?
• What feedback do they need?
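One lightweight way to keep the answers to these questions concrete as prototyping begins is to capture them in a small structured spec per learning objective. This is a sketch under our own assumptions, not part of the ADDEDD model itself; the field names and the sample values are hypothetical.

```python
# Hypothetical sketch: recording design answers per learning objective so they can
# travel with each prototype iteration. Field names are illustrative, not ADDEDD's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectiveDesign:
    objective: str                  # what do we need to happen?
    sequence: List[str]             # what sequence is necessary?
    participants: List[str]         # who should be there?
    actions: List[str]              # what should they do?
    setting: str                    # where should they be?
    interactions: List[str]         # what interactions are necessary?
    tracking: List[str] = field(default_factory=list)   # what tracking do we need?
    feedback: List[str] = field(default_factory=list)   # what feedback do they need?

# Hypothetical example for a VR coaching simulation.
coaching_sim = ObjectiveDesign(
    objective="Deliver difficult feedback to a direct report",
    sequence=["set context", "practice conversation", "debrief"],
    participants=["learner", "virtual direct report"],
    actions=["choose responses", "observe reactions"],
    setting="rendered VR office scene",
    interactions=["branching dialogue"],
    tracking=["choices made", "time per branch"],
    feedback=["coach debrief", "score against rubric"],
)
```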

4. Experience.

To fully understand what we’re creating, we need to get into a prototype early and often. For many in corporate roles, or accustomed to serving clients in corporate roles, the practice of prototyping isn’t always a comfortable one. The key is to give everyone involved access to multi-sensory intuitions, and often this means starting with examples from other work – like a VR experience from gaming or journalism – to illustrate the point. Then we will often create a first VR experience that is very rough, allowing the team to make assessments and key improvements early, and avoid surprises as the project moves along.

Again using VR as an example, we separate the experience process into three stages of prototypes, primarily to illustrate how these prototypes change over time.

5. Develop.

The develop stage begins to get very context-specific, and we suggest leading with some key questions:

a. How will this experience integrate with our courses overall?
b. How will we bring learners into the experience?
c. How will they access it? What equipment is needed?
d. How are we tracking this? Have we implemented xAPI, CMI5 or other standards? (See the sketch after this list.)
e. Have we mapped out what we want to track, and which rubrics will matter?
f. Will we optimize this over time? What is the process for that?
g. How automated should this be?
h. How social should this be?
i. Will we include the ability to practice without management’s review? How about journaling during the experience?
j. More broadly, how are we treating privacy?
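To make question (d) concrete, here is a minimal sketch of recording learner activity as an xAPI statement. The LRS endpoint, credentials and activity ID are hypothetical placeholders for whatever a real Learning Record Store provides; this is illustrative, not a prescribed implementation.

```python
# Minimal, illustrative xAPI statement post. The endpoint, credentials and
# activity ID below are hypothetical placeholders for a real LRS configuration.
import json
import requests  # assumes the 'requests' package is installed

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # hypothetical LRS
LRS_AUTH = ("lrs_user", "lrs_password")                   # hypothetical credentials

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/vr-coaching-sim",  # hypothetical activity
        "definition": {"name": {"en-US": "VR coaching simulation"}},
    },
    "result": {"completion": True, "score": {"scaled": 0.85}},
}

response = requests.post(
    LRS_ENDPOINT,
    auth=LRS_AUTH,
    headers={"Content-Type": "application/json",
             "X-Experience-API-Version": "1.0.3"},
    data=json.dumps(statement),
)
response.raise_for_status()  # surface tracking failures early, during pilots
```

Statements like this, sent from each step of a blended program, are what make the learner analytics reviewed at deployment possible.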

This learning can take a blended approach that includes an introductory eLearning module, a facilitated virtual session, a non-facilitated virtual session (which could include an assessment), and follow-up one-on-one coaching. Metrics and analytics can be collected and reviewed at each step.

6. Deploy.

With all of these new technologies, it is rarely the case that the existing infrastructure is ready to support their deployment. We call this out separately because final deployment is such a critical variable, and one that we’ve repeatedly seen cost far too much to address after the fact. Here is a simple checklist that illustrates how teams should think about deployment:
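The original checklist graphic is not reproduced here. Purely as a stand-in illustration of how a team might encode its own readiness checks – the items below are hypothetical examples, not the authors’ checklist – a sketch might look like this:

```python
# Hypothetical deployment-readiness checks; items are illustrative examples only,
# not the checklist referenced in the paper.
DEPLOYMENT_CHECKS = {
    "network bandwidth supports streaming 360 video": False,
    "headsets procured and device management in place": False,
    "LRS reachable from training locations": False,
    "privacy review completed for learner tracking data": False,
    "support path defined for in-field issues": False,
}

def deployment_ready(checks: dict) -> bool:
    """Report unmet items and return True only when every check passes."""
    unmet = [item for item, done in checks.items() if not done]
    for item in unmet:
        print(f"NOT READY: {item}")
    return not unmet

if __name__ == "__main__":
    print("Ready to deploy:", deployment_ready(DEPLOYMENT_CHECKS))
```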

A new model for a new era

As an industry, we know a great deal more about how people learn than we did in decades past. At the same time, those decades have been spent accumulating intuitions about what works and what doesn’t. Those intuitions are priceless, and it is imperative that learning programs be designed with a process that leverages the experience and judgment of instructional designers, eLearning managers and L&D professionals generally. To do this, we have laid out a simple, six-step process that focuses on early prototyping and personal experience, and acknowledges that we may never again be in a position where the delivered learning experience can be understood in the abstract – we are likely to see accelerating development of technologies that require us to test over and over, and see for ourselves at each step of the development process. This framework helps us quickly evaluate what is needed, uses rapid prototyping methods to accelerate the process, and provides a constant feedback loop for improvement as technology needs shift. We present ADDEDD as a model for that development, and look forward to creating further materials the community can leverage for its own projects.