Postdoctoral Fellow
Department of CS & Engineering
University at Buffalo, SUNY
📧 qingxiao[at]buffalo[dot]edu
📍 Buffalo, NY
Hello, beautiful people!
I’m Qingxiao, and I believe technology should amplify human potential, not replace it. As a UX/UI researcher, designer, and innovator specializing in human-centered AI, I’m committed to pioneering responsible AI systems that democratize emerging technologies and drive meaningful societal impact.
My research interests are:
My background spans academia and industry: I hold a PhD in Information Sciences from the University of Illinois Urbana-Champaign (UIUC), served as Director of Data Science at a B2B AI company, and received academic training in social science theories and methods at the Chinese University of Hong Kong (CUHK).
“See, Feel, Think, Act” is my simple formula for navigating life’s journey. I’m excited to collaborate on research that bridges theory and practice, centers human needs, and creates AI systems that truly serve communities.
LATEST NEWS
Selected Publications as first author [Google Scholar]
Evaluating UX in the context of AI’s complexity, unpredictability, and generative nature presents unique challenges. How can we support HCI researchers in creating comprehensive UX evaluation plans? In this paper, we introduce EvAlignUX, a system powered by large language models and grounded in scientific literature, designed to help HCI researchers explore evaluation metrics and their relationship to research outcomes. In a world where experience defines impact, we discuss the importance of shifting UX evaluation from a “method-centric” to a “mindset-centric” approach as the key to meaningful and lasting design evaluation.
Qingxiao Zheng, Minrui Chen, Pranav Sharma, Yiliu Tang, Mehul Oswal, Yiren Liu, Yun Huang
RQ: When and how does AI engage with humans? What are the UX effects of one-on-one (dyadic AI) and multi-party (polyadic AI) interactions?
Early conversational agents (CAs) focused on dyadic human-AI interaction between a human and a CA; more recently, polyadic human-AI interaction, in which CAs are designed to mediate human-human interactions, has grown in popularity. CAs for polyadic interactions are unique because they encompass hybrid social interactions, i.e., human-CA, human-to-human, and human-to-group behaviors. However, research on polyadic CAs is scattered across different fields, making it challenging to identify, compare, and accumulate existing knowledge. To inform the future design of CA systems, we conducted a literature review of ACM publications and identified a set of works that conducted UX (user experience) research.
Qingxiao Zheng, Yiliu Tang, Yiren Liu, Weizi Liu, and Yun Huang
Participatory Design (PD) aims to empower users by involving them in various design decisions. However, PD evaluation criteria are usually set by the product team and applied only at the end of the design process, without adequate user participation. To address this issue, we propose introducing UX evaluation metrics into design materials at the participatory design INPUT phase.
Qingxiao Zheng and Yun Huang
This multi-phase project, funded by NSF and IES, focuses on collaborating with speech-language pathologists to co-design and evaluate AI systems that support interventions for children with speech delays.
Qingxiao Zheng, Abhinav Choudhury, Parisa Rabbani, Yun Huang, et al.
This empirical study serves as a primer for interested service providers to determine if and how Large Language Model (LLM) technology will be integrated for their practitioners and the broader community. The insights pave the way for synergistic and ethical human-AI co-creation in service contexts.
Qingxiao Zheng, Minrui Chen, Zhongwei Xu, Hyanghee Park, Yun Huang
This study examines the impact of AI-generated digital clones with self-images (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recorded videos (self-recording group) to AI-clone videos (AI-clone group) for online English presentation practice.
Qingxiao Zheng, Joy Chen, Yun Huang
RQ: How do people (e.g., victims, attackers, bystanders, or spectators) respond to safety risks posed by virtual avatars, and what are the design implications for avatar-based human-human interactions?
Understanding emerging safety risks in nuanced social VR spaces and how existing safety features are used is crucial for the future development of safe and inclusive 3D social worlds. Prior research on safety risks in social VR is mainly based on interview or survey data about social VR users’ experiences and opinions, which lacks “in-situ observations” of how individuals react to these risks. Using two empirical studies, this paper seeks to understand safety risks and safety design in social VR. In Study 1, we investigated 212 YouTube videos and their transcripts that document social VR users’ immediate experiences of safety risks as victims, attackers, or bystanders. We also analyzed spectators’ reactions to these risks shown in comments on the videos. In Study 2, we summarized 13 safety features across various social VR platforms and mapped how each existing safety feature in social VR can mitigate the risks identified in Study 1. Based on the uniqueness of social VR interaction dynamics and users’ multi-modal simulated reactions, we call for rethinking and reapproaching safety designs for future social VR environments and propose potential design implications for future safety protection mechanisms in social VR.
RQ: How can we design a chatbot to mediate emotional communication?
Many couples experience long-distance relationships (LDRs), and “couple technologies” have been designed to influence certain relational practices or maintain them in challenging situations. Chatbots show great potential in mediating people’s interactions. However, little is known about whether and how chatbots can be desirable and effective for mediating LDRs. In this paper, we conducted a two-phase study to design and evaluate a chatbot, PocketBot, that aims to provide effective interventions for LDRs. In Phase I, we adopted an iterative design process, conducting need-finding interviews to formulate design ideas and piloting the implemented PocketBot with 11 participants. In Phase II, we evaluated PocketBot with eighteen participants (nine LDR couples) in a week-long field trial followed by exit interviews, which yielded empirical understandings of the feasibility, effectiveness, and potential pitfalls of using PocketBot. First, a knock-on-the-door feature allowed couples to know when to resume an interaction after evading a conflict; this feature was preferred by certain participants (e.g., participants with stoic personalities). Second, a humor feature was introduced to spice up couples’ conversations. This feature was favored by all participants, although some couples’ perceptions of the feature varied due to their different cultural or language backgrounds. Third, a deep-talk feature enabled couples at different relational stages to conduct opportunistic conversations about sensitive topics to explore unknowns about each other, which resulted in surprising discoveries between couples who had been in relationships for years. Our findings provide inspiration for future conversation-based couple technologies that support emotional communication.
Qingxiao Zheng, Daniela Markazi, Yiliu Tang, and Yun Huang
Fun Facts
I share my writing time with my wonderful dog, who appears to be ever-curious.
When I’m not experimenting with AI, I’m experimenting with new recipes in the kitchen.
My workspace is clutter-free.
My dog’s AI Clone.
🐾 You’ve Reached The Tail End 🐶
See you next time 👋