Qingxiao Zheng

Hello beautiful people! 

I’m a fifth-year Ph.D. candidate in the School of Information Sciences at the University of Illinois at Urbana-Champaign, where I conduct research on Human-AI Interaction. I am advised by Dr. Yun Huang at the Social Computing Systems Lab, and I also collaborate closely with Dr. Mike Yao on studies of technology and social behavior and with Dr. Guo Freeman on VR/MR-related projects.

 

Communication is the backbone of human connection, shaping both personal and professional relationships. I am committed to fostering synergy between humans and AI by developing and evaluating AI agents that strengthen human communication. I posit AI agents as social actors and mediators that form reciprocal relationships with humans and augment them.

 

I collaborate with researchers in the social sciences and computer science and work closely with domain experts in industry. “See, Feel, Think, Act” is my simple yet profound formula for navigating life’s journey. I welcome research collaborations and mentoring opportunities. Let’s connect for shared projects!

My AI Clone

qzheng14[at]illinois[dot]edu

About

Before my Ph.D., I spent several years in the AI industry as a Data Science Lead and Product Manager. I worked closely with domain experts, a collaboration style I have maintained throughout my doctoral studies.

Received my training in emerging media technologies (e.g., Virtual Reality, social media) at the Chinese University of Hong Kong and the Communication University of China.

Have a profound interest in all facets of generative AI and publish primarily at CHI and CSCW.

Share my writing time with my wonderful dog, who appears to be ever-curious!

NEWS

2024-03-12: Traveling to New York to present two projects.

2023-09-20: Scheduled to present at CSCW ’23 this October in Minnesota!

2023-08-27: I am actively looking for a position in HCI research and am open to exploring exciting opportunities!

2023-08-21: Thrilled to be co-instructing IS 226: Introduction to HCI this fall and looking forward to the array of class activities and student projects planned!

Research Highlights

AI4ExceptionalEDU: Human-AI Co-Creation

This multi-phase project, funded by NSF and IES, focuses on collaborating with speech-language pathologists to co-design and evaluate AI systems that support interventions for children with speech delays.

Qingxiao Zheng, Abhinav Choudhury, Yun Huang, et al.

Service AI: Human-AI Co-Creation

This empirical study serves as a primer for interested service providers to determine whether and how Large Language Model (LLM) technology should be integrated for their practitioners and the broader community. The insights pave the way for synergistic and ethical human-AI co-creation in service contexts.

GitHub repo available [here].

Qingxiao Zheng, Zhongwei Xu, Abhinav Choudhury, Yuting Chen, Yongming Li, and Yun Huang.

AI-Clones: Cognitive Impacts on Self-Training

This ongoing project spans multiple phases and explores the impact of AI-generated self-clones on improving speech skills. A mixed-design experiment involving an ability-diverse population was conducted.

Qingxiao Zheng and Yun Huang.

Slides: Safety Risks · Behaviors · Protection Mechanisms · Design Implications

[CSCW'23] Avatar-Based Social Interaction: Risk Behavioral Cues

RQ:  How do people (e.g., victims, attackers, bystanders, or spectators) respond to safety risks posed by virtual avatars, and what are the design implications for avatar-based human-human interactions?

Understanding emerging safety risks in nuanced social VR spaces and how existing safety features are used is crucial for the future development of safe and inclusive 3D social worlds. Prior research on safety risks in social VR is mainly based on interview or survey data about social VR users’ experiences and opinions, which lacks “in-situ observations” of how individuals react to these risks. Using two empirical studies, this paper seeks to understand safety risks and safety design in social VR. In Study 1, we investigated 212 YouTube videos and their transcripts that document social VR users’ immediate experiences of safety risks as victims, attackers, or bystanders. We also analyzed spectators’ reactions to these risks shown in comments to the videos. In Study 2, we summarized 13 safety features across various social VR platforms and mapped how each existing safety feature in social VR can mitigate the risks identified in Study 1. Based on the uniqueness of social VR interaction dynamics and users’ multi-modal simulated reactions, we call for further rethinking and reapproaching safety designs for future social VR environments and propose potential design implications for future safety protection mechanisms in social VR.

Qingxiao Zheng, Shengyang Xu, Lingqing Wang, Yiliu Tang, Rohan C. Salvi, Guo Freeman, and Yun Huang

[CHI'23 LBW] Case Study: Bringing UX Metrics to Participatory Design

RQ: How can we support multiple stakeholders in shaping and creating their own AI-mediated experiences?

Participatory Design (PD) aims to empower users by involving them in various design decisions. However, it was found that the PD’s evaluation criteria are usually set by the product team and used only at the end of a design process, without adequate user participation. To address this issue, we proposed introducing UX evaluation metrics into design materials at the participatory design INPUT phase. Using a case study of designing a chatbot for community members to report safety incidents, we studied the impact of this approach with 58 participants from two workshops. Our results showed that the integration of UX evaluation metrics efficiently rationalized participants’ contributions and helped identify key evaluation metrics when setting values for new AI systems, enhancing PD workshop insights. In addition to examining the use of the Program Theory Model to explain PD, our empirical investigation added a new dimension to this model.

Qingxiao Zheng and Yun Huang

[CHI'22] Lit Review: UX Framework of Human-AI Interaction

RQ: When and how does AI engage with humans? What are the UX effects of one-on-one (dyadic AI) and multi-party (polyadic AI) interactions?

Early conversational agents (CAs) focused on dyadic human-AI interaction between humans and the CAs, followed by the increasing popularity of polyadic human-AI interaction, in which CAs are designed to mediate human-human interactions. CAs for polyadic interactions are unique because they encompass hybrid social interactions, i.e., human-CA, human-to-human, and human-to-group behaviors. However, research on polyadic CAs is scattered across different fields, making it challenging to identify, compare, and accumulate existing knowledge. To promote the future design of CA systems, we conducted a literature review of ACM publications and identified a set of works that conducted UX (user experience) research. We qualitatively synthesized the effects of polyadic CAs into four aspects of human-human interactions, i.e., communication, engagement, connection, and relationship maintenance. Through a mixed-method analysis of the selected polyadic and dyadic CA studies, we developed a suite of evaluation measurements on the effects. Our findings show that designing with social boundaries, such as privacy, disclosure, and identification, is crucial for ethical polyadic CAs. Future research should also advance usability testing methods and trust-building guidelines for conversational AI.

Qingxiao Zheng, Yiliu Tang, Yiren Liu, Weizi Liu, and Yun Huang

[CSCW'21] Navigating Social Boundaries: Chatbot-Mediated Communication

RQ: How can we design a chatbot to mediate emotional communication?

Many couples experience long-distance relationships (LDRs), and “couple technologies” have been designed to influence certain relational practices or maintain them in challenging situations. Chatbots show great potential in mediating people’s interactions. However, little is known about whether and how chatbots can be desirable and effective for mediating LDRs. In this paper, we conducted a two-phase study to design and evaluate a chatbot, PocketBot, that aims to provide effective interventions for LDRs. In Phase I, we adopted an iterative design process by conducting need-finding interviews to formulate design ideas and piloted the implemented PocketBot with 11 participants. In Phase II, we evaluated PocketBot with eighteen participants (nine LDR couples) in a week-long field trial followed by exit interviews, which yielded empirical understandings of the feasibility, effectiveness, and potential pitfalls of using PocketBot. First, a knock-on-the-door feature allowed couples to know when to resume an interaction after evading a conflict; this feature was preferred by certain participants (e.g., participants with stoic personalities). Second, a humor feature was introduced to spice up couples’ conversations. This feature was favored by all participants, although some couples’ perceptions of the feature varied due to their different cultural or language backgrounds. Third, a deep talk feature enabled couples at different relational stages to conduct opportunistic conversations about sensitive topics for exploring unknowns about each other, which resulted in surprising discoveries between couples who have been in relationships for years. Our findings provide inspiration for future conversation-based couple technologies that support emotional communication.

Qingxiao Zheng, Daniela Markazi, Yiliu Tang, and Yun Huang

Past Projects (Industry)

AI Engine: JoveArch
JoveArch is an AI-based financial platform designed to provide secure, multi-algorithm modeling services in the cloud. As a next-generation quantitative computing engine, JoveArch employs enhanced analysis, artificial intelligence, and data governance strategies and can be applied to investment transactions, commodity research, and industry regulation.
Data Service: YeeSight
YeeSight is a data platform providing all-in-one information processing solutions for in-depth analysis of cross-language texts. It leverages a global multilingual text and social media database with hundreds of billions of entries to build multilingual NLP algorithms for word segmentation, part-of-speech tagging, named entity recognition, sensitivity analysis, sentiment analysis, automatic summarization, keyword extraction, text classification, text quality assessment, hotspot clustering, event element extraction, and knowledge graph building.
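For illustration only, a minimal text-analysis pipeline of this kind might be sketched as follows using the open-source spaCy library; this is an assumed stand-in rather than the actual YeeSight implementation, and the model name and the analyze helper are hypothetical.

```python
# Illustrative sketch of a small NLP pipeline (not the actual YeeSight code).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # swap in other language models for multilingual use


def analyze(text: str) -> dict:
    """Segment text, tag parts of speech, extract entities and naive keywords."""
    doc = nlp(text)
    return {
        "sentences": [sent.text for sent in doc.sents],
        "pos_tags": [(tok.text, tok.pos_) for tok in doc],
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        # Naive keyword extraction: deduplicated noun chunks
        "keywords": sorted({chunk.text.lower() for chunk in doc.noun_chunks}),
    }


if __name__ == "__main__":
    print(analyze("YeeSight processes multilingual news and social media text.")["entities"])
```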
AI Editing: Smart News Base
Smart News Base is an AI-driven data platform that provides one-stop news editing services for journalists. Its intelligent writing feature is supported by large-scale news and cross-language social media sources. Leveraging neural network machine translation, it also aids editors in global news reporting. "Feng Mian News" is using it!
Global Risk Report
The Global Risk research project transformed unstructured data from around the world into structured data using AI technologies such as data mining, text mining, and machine learning. Through early corpus tagging, we subdivided major global risks from June 2017 to May 2018 into 30 subclasses following the World Economic Forum's classification, established a data model of global risk, and conducted network analysis. In our analysis, we illustrated probabilities and forecast potential influences for the coming years. A report was released at "Rise of Data Democracy," the 2018 World Economic Forum Summer Davos in Tianjin.
Data Analysis: Social Media
This project explored information dissemination in mobile social networks. The methods used could be beneficial for organizations seeking to analyze and identify problems in online public opinion dissemination. Two topics were studied: “Virtual Reality Brands Community” and “Network Attack.” Journal papers were published in Information Discovery and Delivery (2020) and Data Analysis and Knowledge Discovery (2019).
PoemAR: Concept Learning
This project focused on using Augmented Reality (AR) for situational learning of ancient Chinese poetry. We developed an application depicting scenes from the poem Jiangxue, enriched with music and recitation, aiming to evoke students' emotions, a crucial aspect of learning ancient poems. Traditional methods often fragment poems, hindering comprehension and emotional connection. The emphasis was on understanding imagery and fostering aesthetic appreciation in ancient poetry learning.

Publications

2023

Zheng, Qingxiao; Xu, Shengyang; Wang, Lingqing; Tang, Yiliu; Salvi, Rohan C; Freeman, Guo; Huang, Yun

Understanding safety risks and safety design in social VR environments Journal Article

In: Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW1, pp. 1–37, 2023.


Zheng, Qingxiao; Huang, Yun

"Begin with the end in mind": Incorporating UX evaluation metrics into design materials of participatory design Proceedings Article

In: CHI EA ’23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 2023.


2022

Zheng, Qingxiao; Tang, Yiliu; Liu, Yiren; Liu, Weizi; Huang, Yun

UX research on conversational human-AI interaction: A literature review of the ACM digital library Proceedings Article

In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 2022.


Zheng, Qingxiao; Do, Tue Ngoc; Wang, Lingqing; Huang, Yun

Facing the illusion and reality of safety in social VR Proceedings Article

In: CHI Conference on Human Factors in Computing Systems Extended Abstract and Proceedings of the 1st Workshop on Novel Challenges of Safety, Security and Privacy in Extended Reality, 2022.


2021

Zheng, Qingxiao; Markazi, Daniela M; Tang, Yiliu; Huang, Yun

“PocketBot is like a knock-on-the-door!”: Designing a chatbot to support long-distance relationships Journal Article

In: Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, pp. 1–28, 2021.


2020

Wang, Xiwei; Xing, Yunfei; Wei, Yanan; Zheng, Qingxiao; Xing, Guochun

Public opinion information dissemination in mobile social networks–taking Sina Weibo as an example Journal Article

In: Information Discovery and Delivery, 2020.


Wang, Duo; Wang, Xiwei; Zheng, Qingxiao; Tao, Bingxin; Zheng, Guomeng

How interaction paradigms affect user experience and perceived interactivity in virtual reality environment Proceedings Article

In: International Conference on Human-Computer Interaction, pp. 223–234, Springer, Cham, 2020.


Zheng, Qingxiao; Bashir, Masooda

Investigating the differences in privacy news based on grounded theory Proceedings Article

In: International Conference on Applied Human Factors and Ergonomics, pp. 528–535, Springer, Cham, 2020.


2019

Zheng, Qingxiao; Chen, Hsuan-Ting

How virtual reality technology influences news? Investigating VR news, TV news and text news reports in sense of presence and perceived news effects Proceedings Article

In: 69th Conference of International Communication Association (ICA2019), 2019.


My dog’s AI Clone.

 

🐾 Yay, You’ve Reached The Tail End! 🐶 See you next time!