zbai at cs.rochester.edu
3007 Wegmans Hall
Dept. of Computer Science
University of Rochester
I am an Assistant Professor in the ROCHCI group in the Department of Computer Science at the University of Rochester. My research focuses on creating embodied and intelligent interfaces that enhance learning, communication and wellbeing for people with diverse abilities and backgrounds. Research fields I have explored so far include human-computer interaction, augmented reality, tangible user interfaces, embodied conversational agents, technology-enhanced collaborative learning, and assistive technology. My work is published in premier human-computer interaction and learning science conferences such as CHI, ISMAR, IDC, IVA, and AIED, with two best paper nominations.
I received my Ph.D. from the Graphics & Interaction Group at the University of Cambridge in 2015, and was a postdoctoral fellow at the Human-Computer Interaction Institute and the Language Technologies Institute at Carnegie Mellon University before joining the University of Rochester.
I am recruiting PhD students to work with me starting Fall 2020. I am particularly looking for students who are passionate about designing and building future user interfaces to improve learning (e.g. STEM, social-emotional learning) and quality of life (e.g. for people with disabilities and older adults), and who have solid programming skills and experience in user interface design and/or evaluation. Experience in one or more of augmented and virtual reality, game development, embedded systems, machine learning, natural language processing, and computer vision is preferred. Please don't hesitate to contact me if you would like to discuss potential research opportunities. For more information, see https://zhenbai.io/prospective-students
Most of my projects focus on creating playful and collaborative interfaces that cultivate imaginative, social and curious minds across a wide range of ages, abilities and backgrounds. To address the situatedness and embodiment of cognition in social contexts, I explore a design space spanning augmented reality, tangible user interfaces, and embodied conversational agents, and I draw on interdisciplinary technologies in machine learning, natural language processing, and computer vision, together with theories from the cognitive, social, and learning sciences.
Can a Child-like Virtual Peer Elicit Curiosity in Small Group Learning? [IVA'18]
We developed an intelligent virtual child as a peer collaborator to elicit curiosity among 5th and 6th grade children in a multiparty educational game called Outbreak. We applied a child-centered, data-driven approach to design a virtual child who is age-appropriate, gender- and race-ambiguous, and demonstrates intelligence and behaviors comparable to those of the target child group. Being an equal peer is essential for eliciting the cognitive dissonance that evokes curiosity: children tend to challenge and compare each other's ideas, but may simply accept adults' ideas because of adults' higher knowledge authority.
Curiosity is an intrinsic motivation for knowledge seeking and a vital socio-emotional learning skill that promotes academic performance and lifelong learning. Social interaction plays an important role in learning, but the social account of curiosity is largely unexplored. We developed a comprehensive theoretical framework of curiosity in social contexts, combining a theory-driven approach based on psychology, the learning sciences and group dynamics with a data-driven approach based on empirical observation of small-group science activities in the lab, in STEM classrooms, and in informal learning environments with children aged 9-14 from groups underrepresented in STEM education. Furthermore, we developed a computational model, using sequential behavior pattern mining, that can predict the dynamics of curiosity, including instantaneous changes in individual curiosity and the convergence of curiosity across group members.
Can AR Storytelling Enhance Theory of Mind? [CHI'15]
Social play with imaginary characters and other playmates helps children develop a key socio-cognitive ability called theory of mind, which enables them to understand their own and other people's thoughts and feelings. In this project, I developed a collaborative Augmented Reality storytelling system that supports divergent thinking, emotion reasoning and communication through enhanced emotion and pretense enactment.
Children with autism spectrum condition often experience difficulty engaging in pretend play, a common childhood activity that scaffolds key cognitive and social development such as language, social reasoning and communication. In this project, I developed the first Augmented Reality system that supports preschool children diagnosed with high-functioning autism in conceptualizing symbolic thinking and mental flexibility in a visual and tangible way.