About

Diana is a multimodal, situation- and context-aware intelligent agent developed for embodied human-computer interaction. She can not only hear and understand linguistic instructions, but can also see and understand the user’s gestures and the context she inhabits. She is also embodied, possessing effectors that allow her to manipulate objects in her world, so Diana and a human can interact through language and gesture to collaborate on shared tasks. Embodied HCI integrates language, perception, and situated grounding, uses multimodal semantics to encode context in real time, and provides rich contextual parsing and examinable environmental awareness. Watch the demo video on the main page and other example videos below!

Diana Version 1.0. This earlier architecture was limited to turn-taking.

Diana is interruptible and correctable.

People can interact with Diana using only gestures, only speech, or any combination of the two.
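To make the speech/gesture flexibility concrete, here is a minimal, hypothetical sketch of how inputs from either or both modalities could be fused into a single grounded command. It is an illustration only, not Diana's actual implementation; the types and the fuse_inputs() function are assumptions introduced for this example.

```python
# Illustrative sketch only: NOT Diana's actual code. The SpeechInput,
# GestureInput, Command types and fuse_inputs() are hypothetical, shown
# to illustrate speech-only, gesture-only, and combined interaction.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SpeechInput:
    text: str                      # e.g. "put that over there"


@dataclass
class GestureInput:
    kind: str                      # e.g. "point", "grab", "push"
    target: Optional[str] = None   # object picked out by the gesture


@dataclass
class Command:
    action: str
    target: Optional[str]


def fuse_inputs(speech: Optional[SpeechInput],
                gesture: Optional[GestureInput]) -> Optional[Command]:
    """Resolve speech, gesture, or both into one grounded command."""
    if speech and gesture:
        # Combined: language supplies the action, gesture grounds the
        # referent (e.g. "put *that* there" plus a point at the red block).
        return Command(action=speech.text, target=gesture.target)
    if speech:
        # Speech only: the utterance must name its referent explicitly.
        return Command(action=speech.text, target=None)
    if gesture:
        # Gesture only: a deictic gesture alone can still select an object.
        return Command(action=gesture.kind, target=gesture.target)
    return None  # nothing to act on


if __name__ == "__main__":
    # "Put that over there" accompanied by a point at the red block.
    cmd = fuse_inputs(SpeechInput("put that over there"),
                      GestureInput("point", target="red block"))
    print(cmd)  # Command(action='put that over there', target='red block')
```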