K-12 educator shortages persist despite local, state, and federal initiatives aimed at increasing staffing and capacity. One promising response is to assess and upskill paraprofessionals, supporting their advancement to full licensure as educators.

At WGU Labs, we recently completed a pilot of a prototype simulation tool designed to measure teaching skills more efficiently. Many paraprofessionals bring years of hands-on classroom experience, but traditional assessments often fail to capture what they already know how to do. Our goal was to explore whether AI-powered simulations could help these experienced practitioners demonstrate competency in ways that are both rigorous and respectful of their time, potentially accelerating their path to full licensure.

But this pilot revealed more than a proof of concept. It surfaced key takeaways for higher education leaders rethinking how they credential educators, as well as for K-12 administrators seeking faster, more reliable pipelines to fill teaching roles.

This blog outlines what we learned, its implications for instructional design in the AI era, and why we believe every institution should consider building its own tools.

Real-World Practice for Real-World Problems: The Case for Skill Building and Validation

A significant gap exists in educator preparation and professional development: limited availability of objective, performance-based methods for validating essential teaching skills, particularly Universal Design for Learning (UDL) competencies and the critical "soft skills" required in the classroom.

While UDL is a research-based framework that guides inclusive learning design, studies indicate that relying solely on self-reported survey data can limit objectivity. We needed a tool that could generate and analyze real-world performance data to validate the application of UDL in practice, moving beyond knowledge about UDL to demonstrated skill mastery.

Similarly, while the simulation was primarily designed to help pre-service teachers build new skills, it also serves as a way to validate the existing, high-leverage skills of experienced learners, such as paraprofessionals. Traditional assessments for paraprofessionals primarily test foundational academic knowledge, failing to measure essential competencies such as the empathy, communication, and cultural competence outlined in national standards. Our simulation was designed to address these gaps by offering a platform for both skill development and competency validation.

The Pilot

Over 2,000 students were invited to participate, with 187 completing the pre-survey and 37 engaging in all three parts of the pilot. To ensure participants had a foundational knowledge of UDL, initial participation was limited to students who had completed at least two terms of study. However, the criteria were later expanded to include paraprofessionals early in their programs, responding to their strong interest in joining the pilot.

The simulation, developed quickly and efficiently using the Playlab prototyping tool, presents educators with classroom scenarios and student profiles, prompting them to make instructional decisions based on UDL principles. Participants record their sessions while thinking out loud, offering valuable insight into their decision-making process. The ease of prototyping in Playlab allowed us to quickly integrate feedback from internal testers, accelerating our development cycle.
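To make that interaction pattern concrete, here is a minimal sketch of the scenario-and-feedback loop described above. This is an illustrative stand-in rather than the Playlab implementation: the class names, the get_feedback helper, and the keyword-based scoring are all hypothetical, and the actual tool uses generative AI to produce rubric-based feedback rather than simple cue matching.

```python
from dataclasses import dataclass

@dataclass
class StudentProfile:
    name: str
    needs: list[str]  # e.g., ["low vision", "emerging English"]

@dataclass
class Scenario:
    description: str
    students: list[StudentProfile]
    # Maps each UDL principle to cue phrases a strong response might include.
    udl_cues: dict[str, list[str]]

def get_feedback(scenario: Scenario, response: str) -> dict[str, str]:
    """Score a free-text instructional decision against each UDL principle.

    A real simulation would send the response to a generative model for
    rubric-based feedback; keyword matching stands in for that here.
    """
    text = response.lower()
    feedback = {}
    for principle, cues in scenario.udl_cues.items():
        if any(cue in text for cue in cues):
            feedback[principle] = "Proficient: your plan addresses this principle."
        else:
            feedback[principle] = "Consider how you might apply this principle."
    return feedback

# Example session: present a scenario, collect a decision, return feedback.
scenario = Scenario(
    description="Plan a small-group reading lesson for a mixed-needs class.",
    students=[StudentProfile("Ava", ["low vision"])],
    udl_cues={
        "multiple means of representation": ["audio", "large print", "visual"],
        "multiple means of engagement": ["choice", "interest", "goal"],
    },
)
decision = "I would offer the text in large print and audio, and let students choose."
for principle, note in get_feedback(scenario, decision).items():
    print(f"{principle}: {note}")
```

In the pilot itself, the feedback step was where participants reported the most value, which is why the loop above closes with per-principle feedback rather than a single score.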

Practicing What They’ve Learned

Before using the simulation, students rated their confidence in differentiating instruction. Afterward, those same students reported a significant increase in confidence across all measured areas.

In their own words, participants described the simulation as “realistic,” “thought-provoking,” and “affirming.” One participant noted:

“I thought I had no idea what I was doing, but the experience helped me realize I did!”

Others highlighted the value of feedback and the tool’s ability to help them better support students with a variety of needs—including those with visual impairments and other exceptionalities.

“It helped to put actual scenarios to work and figure out ideas on the spot for situations that could really happen.”

From Confidence to Competence

Students weren’t just more confident—they were learning to think differently. The structured feedback loop helped learners reflect, iterate, and validate their instincts. One participant shared:

“The feedback portion of this activity was a great tool…while also making me feel confident each time I was told that my responses were proficient or exemplary.”

Even when the simulation pushed them out of their comfort zones, learners appreciated the opportunity to stretch:

“When the simulation would ask me to expand on answers, I noticed that I would have to sit and really think before responding at that point.”

This balance of affirmation and challenge encouraged deeper engagement, helping learners shift from passive knowledge to active skill-building.

Key Takeaways

Confidence can shift more quickly than we expect. After just 20 minutes with the simulation, participants reported meaningful gains in confidence across all four UDL skill areas we measured. We didn't expect that. Practice environments are often framed as slow-burn skill builders, but this pilot suggested that even brief, focused interactions can help learners recognize competence they already have, especially those with prior classroom experience.

Experienced practitioners seek ways to demonstrate their knowledge. We originally limited participation to students who had completed at least two terms of study. However, we received inbound requests to join the pilot from paraprofessionals who were just starting the program. Ultimately, 12 of our 37 participants were working paraprofessionals. That demand signal reinforced our hypothesis: people with hands-on experience are eager for tools that enable them to demonstrate, not just develop, their skills.

Validation matters as much as instruction. We expected participants to value feedback that pushed their thinking. They did. But what came through just as strongly was the value of confirmation — moments when the simulation reflected back that their instincts were sound. 

Voice interaction would lower the barrier. Several participants suggested adding speech-to-text functionality to the tool. In hindsight, that makes sense: if we want to meet working adults where they are, the interaction should feel as effortless as a conversation, not as laborious as a written assignment.

In sum, this pilot underscores more than just the promise of simulation. It reflects a broader opportunity to rethink how we equip educators with practical, confidence-building tools.

Rethinking Capacity in the Age of AI

As we strive to deliver more personalized and inclusive learning experiences, the path forward may not lie solely in vendor catalogs. This pilot demonstrated what’s possible when we take an active role in building tools grounded in learning science, aligned with our mission, and responsive to the realities of modern learners.

Simulation is one of the most effective ways to support skill development; however, it has long been perceived as costly, logistically complex, and challenging to scale, particularly in online settings and for adult learners. Generative AI changes that. With the right design and infrastructure, we can now prototype, test, and iterate on immersive learning experiences that would have been unthinkable just a few years ago.

The tool we piloted is one we will continue to develop and refine across contexts and use cases. Starting small and learning through iteration is a valuable way for institutions to explore how emerging tools like AI can enhance instruction. With thoughtful design and collaboration, colleges and universities can begin to build and test tools that support their unique student populations, treating technology not as a replacement for human connection, but as a way to extend and strengthen it.

A Takeaway for Others Exploring This Space

If there's one lesson we'd offer, it's this: think about which skills in your field are high-stakes but hard to practice, then ask how AI might create low-stakes opportunities to rehearse them. For us, that was inclusive teaching. For others, it might be clinical judgment, difficult conversations, or leadership under pressure. The technology is finally catching up to the pedagogy.