This week, I was excited to finally get Captivate installed so I could begin working hands-on. While I had completed the Adobe Captivate course on LinkedIn Learning the previous week, I quickly realized that real-world usage introduces challenges that tutorials don’t always cover. For example, I had envisioned letting users select from a bank of quiz questions and choose how many they wanted to attempt, similar to what platforms like MeasureUp provide. Unfortunately, Captivate doesn’t seem to support dynamic quiz generation or custom question counts. I also explored whether Articulate 360 could handle this, but it appears to have the same limitation.
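To make concrete what I mean by "dynamic quiz generation," here is a minimal Python sketch of the behavior I was hoping the tool would handle natively: drawing a learner-chosen number of items from a larger bank. The question fields are placeholders of my own, not Captivate's internal data model.

```python
import random

# Hypothetical question bank; in my case these would be JLPT items.
# The fields here are placeholders, not Captivate's internal format.
question_bank = [
    {"id": i, "topic": "grammar", "prompt": f"Question {i}", "answer": "..."}
    for i in range(1, 381)
]

def build_quiz(bank, num_questions):
    """Randomly draw a learner-chosen number of questions from the bank."""
    if num_questions > len(bank):
        raise ValueError("Requested more questions than the bank contains.")
    return random.sample(bank, num_questions)

# For example, a learner who wants a quick 20-question practice set:
practice_set = build_quiz(question_bank, 20)
```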
The biggest obstacle so far has been dealing with Japanese character rendering issues. When typing into Captivate’s question slide fields, I encountered mojibake—garbled or corrupted characters. As a workaround, I’ve started composing questions in an external editor and copy-pasting them into Captivate, but with a goal of 380 questions, that’s hardly scalable. I’m now investigating the "Import questions as CSV" feature and will test that soon. If it doesn’t work as hoped, I may need to revise my project deliverables.
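Since the mojibake looks like an encoding mismatch, my plan before testing the import is to generate the CSV programmatically with explicit UTF-8 encoding rather than saving it by hand. This is only a rough Python sketch: the column headers below are placeholders I would swap out for the headers in Adobe's actual CSV question template.

```python
import csv

# Placeholder rows; the real rows would come from my external question file.
questions = [
    {
        "question": "次の文の＿＿に入る最もよいものを選びなさい。",
        "correct": "食べられる",
        "wrong1": "食べさせる",
        "wrong2": "食べている",
        "wrong3": "食べた",
    },
]

# Column names are placeholders -- replace them with the headers from
# Captivate's own CSV question import template before importing.
fieldnames = ["question", "correct", "wrong1", "wrong2", "wrong3"]

# utf-8-sig writes a byte-order mark, which helps some Windows tools
# recognize the file as UTF-8 and display Japanese text instead of mojibake.
with open("jlpt_questions.csv", "w", encoding="utf-8-sig", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(questions)
```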
While wrestling with these issues, I found myself connecting strongly to this week’s readings—particularly Chapter 10 on engagement. Clark and Mayer (2024) emphasize that true learning comes from psychological engagement, not just clicking buttons or progressing through screens. Although my module is primarily an assessment tool, it still needs to be thoughtfully designed to keep learners mentally involved. If learners are just skimming through dozens of questions with little feedback or variation, the psychological engagement may be minimal. This is motivating me to think more deeply about how to craft richer feedback or scaffold questions that prompt reflection.
Chapter 11’s focus on example-based instruction also gave me pause. While I originally wasn’t planning to include worked examples—since the JLPT is purely assessment-driven—I now see potential value in including a few sample questions with explanations before the actual assessment begins. These could model the kind of thinking test-takers should engage in, much like Clark and Mayer (2024) suggest. For example, a sample grammar item could walk the learner through why a specific verb conjugation is correct. This aligns with the worked example effect and may enhance transfer of learning, especially for trickier question types.
Lastly, Chapter 12’s emphasis on practice reinforced something I already suspected: my module can’t just be a giant quiz. It needs to offer learners a sense of progress and provide meaningful feedback. Clark and Mayer (2024) outline several principles of effective practice—such as making it job-relevant, including sufficient opportunities for practice, and providing feedback. These resonate with the kind of immersive, supportive environment I originally hoped to build. While I’m constrained by the tools, I’m now thinking more about how to offer scaffolded challenge by grouping questions by topic and difficulty rather than dumping all 380 into a single linear flow.
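To think through that grouping idea, here is a small Python sketch of how I might bucket the 380 items by topic and difficulty before building the slides. The metadata tags are assumptions of mine; the real tags would come from however I label the questions in my source file.

```python
from collections import defaultdict

# Hypothetical metadata; the actual tags would come from my question file.
questions = [
    {"id": 1, "topic": "grammar", "difficulty": "easy", "prompt": "..."},
    {"id": 2, "topic": "vocabulary", "difficulty": "hard", "prompt": "..."},
    {"id": 3, "topic": "grammar", "difficulty": "hard", "prompt": "..."},
]

def group_questions(items):
    """Bucket questions by (topic, difficulty) so each bucket can become
    its own short, scaffolded section instead of one 380-item quiz."""
    groups = defaultdict(list)
    for q in items:
        groups[(q["topic"], q["difficulty"])].append(q)
    return groups

for (topic, difficulty), items in sorted(group_questions(questions).items()):
    print(f"{topic} / {difficulty}: {len(items)} question(s)")
```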
Overall, while I’m hitting some frustrating roadblocks with Captivate—particularly around Japanese text handling and quiz flexibility—the readings this week helped me re-center on what matters most: engagement, support, and thoughtful design. I may not be able to create the full vision I had in mind, but I can still make a resource that is useful, motivating, and aligned with evidence-based learning principles.
Reference:
Clark, R. C., & Mayer, R. E. (2024). e-Learning and the science of instruction (5th ed.). John Wiley & Sons.