Pillar 02 · Research Program
A conversational AI guide — with a live avatar — that introduces opera's works, performers, and history to audiences on their own terms, using natural voice and generative video.
The AI opera docent is deployable as a voice chatbot and, in its more advanced form, as a live avatar with synchronized video; in either mode it introduces users to opera's works, composers, performers, and history through natural dialogue.
The design philosophy is accessibility-first: the docent meets users at their level of familiarity, from complete novices to experienced opera-goers, adapting its explanations, vocabulary, and depth accordingly. Voice delivery using high-fidelity AI synthesis creates an experience that feels personal and human rather than transactional.
The avatar form — a real-time AI-generated video persona that speaks, reacts, and gestures naturally — extends the docent into an entirely new modality. The research program examines whether embodied avatar interaction produces measurably different engagement outcomes than voice-only or text-based equivalents.
The system is deployed in partnership with Opera Verace Foundation and evaluated at collaborating opera institutions as an audience development tool offered to ticket purchasers before, during, and after performances.
Planned evaluation studies:
- Pre/post measurement of ticket purchase intent among users who interact with the docent versus a control group.
- Controlled comparison of voice-only vs. avatar modalities on engagement depth, return rate, and user-reported experience.
- Longitudinal study of repeat interaction rates and their correlation with downstream attendance behavior tracked through STAGE.
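As a minimal sketch of how the modality comparison above could be analyzed, the snippet below applies a two-proportion z-test to return rates in two cohorts. All counts, cohort labels, and the choice of test are illustrative assumptions, not data or methods from the program itself.

```python
from math import erf, sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions
    (e.g., the share of users who return for a second session)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: users returning for a second session.
z, p = two_proportion_ztest(success_a=180, n_a=500,   # avatar cohort
                            success_b=140, n_b=500)   # voice-only cohort
```

In practice the study would likely pair such a test with the pre-registered engagement and experience measures listed above rather than rely on return rate alone.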