Home


zbai at cs.rochester.edu
(585) 275-7747
3007 Wegmans Hall
Dept. of Computer Science
University of Rochester

I am an Assistant Professor in the Department of Computer Science at the University of Rochester, where I co-direct the ROCHCI group. My research focuses on creating embodied and intelligent interfaces that enhance learning, communication, and well-being for people with diverse abilities and backgrounds. Research fields that I have explored so far include Human-Computer Interaction, Augmented and Virtual Reality, Tangible User Interfaces, Embodied Conversational Agents, Technology-Enhanced Collaborative Learning, and Assistive Technology. My work has been published in premier HCI, AR/VR, and learning science conferences such as CHI, ISMAR, IDC, IVA, and AIED, with two best paper nominations.

I received my Ph.D. from the Graphics & Interaction Group at the University of Cambridge in 2015 and was a postdoctoral fellow in the Human-Computer Interaction Institute and the Language Technologies Institute at Carnegie Mellon University before joining the University of Rochester.

Work with me
I am recruiting PhD students to work with me. I am particularly looking for students who are passionate about designing and building future user interfaces to improve learning (e.g., AI education, STEM, social-emotional learning), quality of life (e.g., for people with disabilities and older adults), and human-AI collaboration, and who have solid programming skills and experience in user interface design and/or evaluation. Experience in one or more of Augmented and Virtual Reality, machine learning, natural language processing, and computer vision is preferred.
Please don’t hesitate to contact me if you would like to discuss potential research opportunities. For more information, see https://zhenbai.io/prospective-students.

Selected Projects

My research focuses on creating playful and collaborative interfaces that cultivate creative, social, and curious minds across a wide range of ages, abilities, and backgrounds. To address the situated and embodied nature of cognition in social contexts, I explore a design space spanning Augmented Reality, Tangible User Interfaces, and Embodied Conversational Agents, drawing on techniques from machine learning, natural language processing, and computer vision, as well as theories from the cognitive, social, and learning sciences.


How to Support Accessible ML Learning Experiences for K-12 Students and Teachers? [IDC'20] [AIED'21]

There is an increasing need to prepare young learners to be Artificial Intelligence (AI) capable for the future workforce and everyday life. Machine Learning (ML), an integral subfield of AI, has become a new engine that revolutionizes practices of knowledge discovery. We developed a playful learning environment, SmileyCluster, that uses emoji-based data visualization and exploratory visual comparison to help K-12 students and teachers with limited math and computing backgrounds grasp the basics of k-means clustering. Findings from a pre-college summer school show that SmileyCluster effectively supports students in authentic learning activities that combine learning key ML concepts, such as multi-dimensional feature spaces and similarity comparison, with using ML as a knowledge-discovery tool to explore STEM subjects through data-driven scientific inquiry.
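For readers curious about the algorithm behind SmileyCluster's activities, here is a minimal sketch of the k-means idea: each data point in a feature space is repeatedly assigned to its nearest centroid, and each centroid is then recomputed as the mean of its cluster. This is an illustrative sketch only, not the SmileyCluster codebase; the feature values below are hypothetical.

```python
import random

def kmeans(points, k, iterations=20):
    """Group feature vectors into k clusters by nearest-centroid assignment."""
    centroids = random.sample(points, k)  # start from k random data points
    for _ in range(iterations):
        # Similarity comparison: assign each point to its closest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical two-dimensional feature space (e.g., leaf length vs. leaf width).
data = [(1.0, 1.1), (0.9, 1.0), (5.0, 5.2), (5.1, 4.9), (0.8, 1.2), (5.3, 5.0)]
centroids, clusters = kmeans(data, k=2)
print(centroids)
```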


How to Support Non-Intrusive Communication between Deaf Children and Hearing Parents? [ISMAR'19]

More than 90% of Deaf and Hard of Hearing (DHH) infants in the US are born to hearing parents. These children are at severe risk of language deprivation, which may have a lifelong impact on linguistic, cognitive, and socio-emotional development. In this study, we present a proof-of-concept visual augmentation prototype, built on an Augmented Reality (AR) lamp metaphor, that supports context-aware and non-intrusive parent-child interaction in American Sign Language (ASL), adapting joint-attention strategies to the child's communication modality. The prototype enabled empirical studies that gathered in-depth design critiques and a preliminary usability evaluation from domain experts, novice ASL learners, and hearing parents of DHH children.


Can a Child-like Virtual Peer Elicit Curiosity in Small Group Learning? [IVA'18]

We developed an intelligent virtual child as a peer collaborator to elicit curiosity among 5th- and 6th-grade children in a multiparty educational game called Outbreak. We applied a child-centered, data-driven approach to design a virtual child who is age appropriate, gender and race ambiguous, and exhibits intelligence and behaviors comparable to those of the target child group. Being an equal peer is essential for eliciting the cognitive dissonance that evokes curiosity, as children tend to challenge and compare each other's ideas but may simply accept adults' ideas due to their higher knowledge authority.


Can Social Interaction Foster Curiosity? [ECTEL’17a] [ECTEL’17b] [AIED'18]

Curiosity is an intrinsic motivation for knowledge seeking and a vital socio-emotional learning skill that promotes academic performance and lifelong learning. Social interaction plays an important role in learning, but the social account of curiosity remains largely unexplored. We developed a comprehensive theoretical framework of curiosity in social contexts, combining a theory-driven approach grounded in psychology, the learning sciences, and group dynamics with a data-driven approach based on empirical observation of small-group science activities in the lab, in STEM classrooms, and in informal learning environments, with children aged 9-14 from groups underrepresented in STEM education. Furthermore, we developed a computational model, based on sequential behavior pattern mining, that predicts the dynamics of curiosity, including instantaneous changes in individual curiosity and the convergence of curiosity across group members.
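To give a flavor of the modeling approach, the sketch below shows the core idea of sequential behavior pattern mining in this setting: counting which short sequences of coded behaviors most often precede a curiosity event. This is a minimal illustration under assumed behavior codes ("idea", "disagree", "question", "curiosity"), not the published model.

```python
from collections import Counter

def patterns_before(sessions, target="curiosity", length=2):
    """Count length-n behavior subsequences that immediately precede the target event."""
    counts = Counter()
    for seq in sessions:
        for i, event in enumerate(seq):
            if event == target and i >= length:
                counts[tuple(seq[i - length:i])] += 1
    return counts

# Hypothetical coded behavior sequences from two small-group sessions.
sessions = [
    ["idea", "disagree", "question", "curiosity", "idea"],
    ["question", "idea", "disagree", "question", "curiosity"],
]
for pattern, n in patterns_before(sessions).most_common(3):
    print(pattern, n)
```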


Can AR Storytelling Enhance Theory of Mind? [CHI'15]

Social play with imaginary characters and other playmates helps children develop a key socio-cognitive ability called theory of mind, which enables them to understand their own and other people's thoughts and feelings. In this project, I developed a collaborative Augmented Reality storytelling system that supports divergent thinking, emotion reasoning and communication through enhanced emotion and pretense enactment.


Can Pretense be Externalized through an AR Looking Glass? [ISMAR’13] [TVCG'15]

Children with autism spectrum condition often experience difficulty engaging in pretend play, a common childhood activity that scaffolds key cognitive and social development such as language, social reasoning, and communication. In this project, I developed the first Augmented Reality system that helps preschool children diagnosed with high-functioning autism conceptualize symbolic thinking and mental flexibility in a visual and tangible way.