About

Lei Zhang is a Postdoctoral Research Associate in the HCI group within the Computer Science Department at Princeton University, working with Andrés Monroy-Hernández. He recently completed his Ph.D. at the University of Michigan, School of Information, where he was advised by Steve Oney and Anhong Guo. His research focuses on human-computer interaction, particularly on designing and building creativity support tools that enable end-users to craft immersive experiences in Augmented Reality (AR) and Virtual Reality (VR). He invents novel content creation techniques and systems, and studies how they can enhance people's creativity, collaboration, and communication.

Lei’s work has been published at top-tier HCI venues including CHI, UIST, and CSCW. His first-authored papers have received a Best Paper Award at CSCW 2022 and a Best Short Paper Award at VL/HCC 2019. Lei interned at Snap Research twice (Summer 2021 & 2022), working with Dr. Andrés Monroy-Hernández, Dr. Rajan Vaish, and Dr. Fannie Liu. He holds a BE in Software Engineering from Shanghai Jiao Tong University. Outside of work, Lei is passionate about music production, skateboarding, and 35mm film photography.

📢 Lei will join New Jersey Institute of Technology (NJIT) as a tenure-track assistant professor, starting Spring 2025. He will be recruiting at all levels (postdocs, PhD students, master's and undergraduate students, and visitors) to work on exciting research projects. Please check out our join-us page for more details.

📩 raynez@princeton.edu

Publications

VRCopilot: Authoring 3D Layouts with Generative AI Models in VR

Lei Zhang, Jin Pan, Jacob Gettig, Steve Oney, Anhong Guo.
UIST 2024
We introduce VRCopilot, a mixed-initiative system that integrates pre-trained generative AI models into immersive authoring, to facilitate human-AI co-creation in VR. VRCopilot presents multimodal interactions to support rapid prototyping and iterations with AI, and intermediate representations such as wireframes to augment user controllability over the created content. We evaluated manual, semi-automatic, and fully automatic creation, and found that semi-automatic creation with wireframes enhanced the creation experience and user agency compared to fully automatic approaches.

Jigsaw: Authoring Immersive Storytelling Experiences with Augmented Reality and Internet of Things

Lei Zhang, Daekun Kim, Youjean Cho, Ava Robinson, Yu Jiang Tham, Rajan Vaish, Andrés Monroy-Hernández.
CHI 2024
We introduce Jigsaw, a system that enables novices to both consume and create immersive stories that harness virtual and physical augmentations. Jigsaw achieves this through the novel fusion of mobile AR with off-the-shelf Internet-of-things (IoT) devices. We evaluated the consumption and creation of immersive stories through a qualitative user study with 20 participants, and found that end-users were able to create immersive stories and felt highly engaged in the playback of three stories. However, sensory overload was one of the most notable challenges across all experiences.

VRGit: A Version Control System for Collaborative Content Creation in Virtual Reality

Lei Zhang, Ashutosh Agrawal, Steve Oney, Anhong Guo.
CHI 2023
We introduce VRGit, a new version control system for collaborative content creation in VR. VRGit enables novel visualization and interactions for version control commands such as history navigation, commits, branching, previewing, and re-using. VRGit is also designed to facilitate real-time collaboration by providing awareness of users’ activities and version history through concepts of portals and shared history visualizations.

Auggie: Encouraging Effortful Communication through Handcrafted Digital Experiences

Lei Zhang*, Tianying Chen*, Olivia Seow*, Tim Chong, Sven Kratz, Yu Jiang Tham, Andrés Monroy-Hernández, Rajan Vaish, and Fannie Liu.
CSCW 2022 (🏆 Best Paper Award)
Digital communication is often brisk and automated. From auto-completed messages to “likes,” research has shown that such lightweight interactions can affect perceptions of authenticity and closeness. On the other hand, effort in relationships can forge emotional bonds by conveying a sense of caring and is essential in building and maintaining relationships. To explore effortful communication, we designed and evaluated Auggie, an iOS app that encourages partners to create digitally handcrafted Augmented Reality (AR) experiences for each other.

FlowMatic: An Immersive Authoring Tool for Creating Interactive Scenes in Virtual Reality

Lei Zhang and Steve Oney.
UIST 2020
In this paper, we introduce FlowMatic, an immersive authoring tool that raises the ceiling of expressiveness by allowing novice programmers to specify reactive behaviors directly in VR.

Studying the Benefits and Challenges of Immersive Dataflow Programming

Lei Zhang and Steve Oney.
VL/HCC 2019 (🏆 Best Short Paper Award)
In this paper, we study the benefits and challenges of immersive dataflow authoring, a paradigm that allows users to build VR applications using dataflow notation while immersed in the VR world.

Cubicle: An Adaptive Educational Gaming Platform for Training Spatial Visualization Skills

Ziang Xiao, Helen Wauck, Zeya Peng, Hanfei Ren, Lei Zhang, Shiliang Zuo, Yuqi Yao, and Wai-Tat Fu.
IUI 2018
In this paper, we study gamification as a way to motivate first-year engineering students to take part in an online workshop designed to train their spatial visualization skills. Our game contains eight modules, each designed to train a different component of spatial visualization.

News

08/2024: Relocated to New Jersey. 🏠

06/2024: Officially Dr. Zhang! 🎓

05/2024: I'll present Jigsaw at CHI 2024. 🏄🏻‍♂️

04/2024: I'm joining NJIT as an Assistant Professor in Spring 2025, after a one-term postdoc in Princeton's Computer Science Department.

© Lei Zhang 2024 — Built with ❤️‍🩹