While the benefits of addressing inequity in our society have been profound, honest missteps in speaking to others can now be live-streamed and preserved for all time, as if we are static beings.

Amid all the buzz around the Metaverse and Web 3.0, a variety of tools and workflows now draw on VR/AR and artificial intelligence (AI) to provide training and education opportunities for the improvisational side of working with others. The intersection of these bleeding-edge technologies results in realistic simulations of human conversation. It allows us to re-create detailed experiences of the human condition, providing natural verbal (rate of speech, tone) and non-verbal (body posture, eye contact) communication with a range of life-like avatars. Harnessing dynamic data into machine-learning profiles lets these avatars learn your specific communication style and the history of your interactions, producing life-like simulations around a variety of difficult topics (personal, medical, and more). This is a huge leap from what most of us have experienced with the basic applications of this technology seen today in online chatbots and AI-driven "smart-home" apps.

These simulations provide a safe place to explore communication techniques, both new and old, allowing us to skill up before entering "one-shot" conversations with students, patients, peers, and strangers. We are thus able to hone important soft skills in advance of critical first impressions and the delicate handling of difficult topics.

Furthermore, these advanced technologies allow us to simulate not just the present but also what a follow-up conversation may look like in a few days or a few years, as we simulate key moments of an individual's lifespan. This lets us emulate how understandings, personalities, and the techniques needed to interact appropriately evolve in individuals over time, providing a bigger picture of holistic care.

Like most disruptive technology, the vulnerability lies in anticipating where these tools will lead. It is reasonable to expect many deeper applications of this work as AI-enabled AR avatars become better at learning our communication styles. They have the potential to challenge us dynamically and keep raising the bar of our education, leaving endless potential for branching applications.

For example, we can connect a doctor with a very detailed communication style (rate of speech, eye contact, words used, pauses, etc.) with a patient identified as best able to receive that nuanced communication style. As another example, imagine parents interacting with an avatar to build skills in discussing depression with a teenager, then pivoting to experience the simulation from the teenager's point of view. Allowing a parent to experience being spoken to by adults using a variety of language, and to be limited to the responses available from that child's perspective, could quickly build empathy and a more authentic understanding, all from the repeatable, safe comfort of a device at home. While these examples are on the horizon, even closer are augmented technologies that will allow a doctor to review and collect patient medical information while maintaining eye contact and fostering a more human connection in real time.

As is often the case with emerging technology, it can be difficult to see the long-term benefits of new tools, especially when they are blended with other methods.
These tools, like most, belong in a metaphorical toolbox alongside other existing and emerging pathways, so that unique individual needs can be addressed in the best way possible. As technology advances and aligns more closely with the realities of our world, pausing to look beyond the week ahead reveals an exciting landscape that consistently unveils innovative ways to enhance our interactions.