This week, I continued working on my JLPT N3 grammar and vocabulary assessment module in Adobe Captivate, but more importantly, I began to shift my mindset from building around technical constraints to designing with intention—something heavily emphasized in Chapters 13 through 16 of e-Learning and the Science of Instruction.
Last week, I had been troubleshooting character-corruption issues when importing Japanese quiz content into Captivate. After many false starts (including installing the Japanese version of the application), I discovered that the core problem was not Captivate at all but how I was saving my files. Once I exported my CSV files with UTF-8 encoding, the mojibake disappeared and Japanese questions and answers imported cleanly. That discovery was huge: it saved the project from a possible overhaul and, more importantly, gave me room to refocus on the learning experience itself.
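For anyone hitting the same wall, the whole fix comes down to one re-encoding step. Here’s a minimal Python sketch of that step; the file names are placeholders, and Shift_JIS as the source encoding is an assumption (it’s a common default for Japanese-locale Excel exports), so adjust both to match your own files.

```python
# Minimal sketch: re-save a quiz CSV as UTF-8 so the Japanese text
# survives import. File names and the Shift_JIS source encoding are
# assumptions; match them to however the original file was saved.
from pathlib import Path

SRC = Path("n3_quiz_shiftjis.csv")  # hypothetical original export
DST = Path("n3_quiz_utf8.csv")      # UTF-8 copy for the Captivate import

text = SRC.read_text(encoding="shift_jis")
# "utf-8-sig" prepends a byte-order mark, which some Windows tools rely
# on to auto-detect UTF-8; plain "utf-8" may also work for your importer.
DST.write_text(text, encoding="utf-8-sig")
print(f"Re-encoded {SRC} -> {DST}")
```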
In line with Chapter 13’s segmenting and pretraining principles, I realized that my original plan to include nearly 400 questions was not only unwieldy to build but also cognitively overwhelming for the learner. Inspired by the segmenting principle, I’ve now cut the total to about 100 questions and grouped them into a bucket system, so each learner sees 10 randomized questions at a time. The pretraining principle shapes the entry point as well: I’m planning to add a short orientation screen before the quiz begins, offering examples and brief grammar hints to scaffold the experience.
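Captivate’s question pools can handle this kind of random draw natively, but to make the bucket logic explicit, here’s a small Python sketch; the pool contents and field names are purely illustrative.

```python
# Illustrative bucket system: ~100 questions in a pool, 10 drawn at
# random (without replacement) for each quiz attempt.
import random

question_pool = [
    {"id": i, "prompt": f"Question {i}"}  # stand-ins for real N3 items
    for i in range(1, 101)
]

def draw_quiz(pool, n=10):
    """Sample n distinct questions for a single attempt."""
    return random.sample(pool, k=n)

attempt = draw_quiz(question_pool)
print(sorted(q["id"] for q in attempt))
```

Because the draw is without replacement within an attempt but independent across attempts, repeat learners get a fresh mix each time while still covering the same underlying pool.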
Chapters 14 and 15 also influenced how I’m thinking about the learner’s path through the module. Chapter 14’s insights into learner control reminded me to strike a balance between guided structure and flexible navigation. While Captivate doesn’t offer a robust quiz-customization engine, I can still give the user some sense of control, perhaps by offering thematic categories (grammar vs. vocabulary) to choose from before the randomized quiz sequence begins.
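To sketch how that choice could feed the randomized draw, here’s one way to tag items by theme and sample within the learner’s chosen category. The tagging scheme is my own assumption, not a Captivate mechanism; inside Captivate, each category would more likely just be its own question pool.

```python
# Sketch of learner control: filter the pool by the learner's chosen
# theme, then run the same randomized draw over that slice only.
import random

tagged_pool = (
    [{"id": f"g{i}", "category": "grammar"} for i in range(1, 51)]
    + [{"id": f"v{i}", "category": "vocabulary"} for i in range(1, 51)]
)

def draw_by_category(pool, category, n=10):
    subset = [q for q in pool if q["category"] == category]
    return random.sample(subset, k=min(n, len(subset)))

# A learner who picks vocabulary before starting:
print([q["id"] for q in draw_by_category(tagged_pool, "vocabulary")])
```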
Chapter 15 on personalization also had a surprising impact on me. Even though this is a quiz-based module, I’ve started rewriting my on-screen instructions and feedback using a more conversational tone. For example, instead of saying “Incorrect. The correct answer is X,” I’m experimenting with “Almost! Let’s take a look at why X works better here.” It’s a small change, but Clark and Mayer (2024) make a strong case that conversational voice can foster a sense of connection, even in self-paced asynchronous modules.
Finally, Chapter 16 on collaborative learning offered a broader perspective. While collaboration isn’t directly built into my current module, it made me think about future iterations, such as a shared quiz review or a vocabulary-competition element that could support asynchronous peer interaction. For now, though, it’s enough to be mindful that learning doesn’t happen in a vacuum, and to design content that’s engaging, self-reflective, and shareable.
Looking back, I’m thankful for the setbacks I encountered early on; they pushed me to think more deeply about how I was applying instructional design principles. I’ve gone from trying to recreate the JLPT to building something far more thoughtful: a quiz experience that’s segmented, personalized, cognitively supportive, and ultimately more in line with sound e-learning design.
Reference:
Clark, R. C., & Mayer, R. E. (2024). e-Learning and the science of instruction (5th ed.). John Wiley & Sons.