Qingxiao Zheng (On the Job Market)

Postdoctoral Fellow

Department of CS & Engineering

University at Buffalo, SUNY

šŸ“§ qingxiao[at]buffalo[dot]edu

šŸ“ Buffalo, NY

[Google Scholar] [X/Twitter] [LinkedIn]

Hello, beautiful people! 

 

I’m Qingxiao, and I believe technology should amplify human potential, not replace it. As a UX/UI researcher, designer, and innovator specializing in human-centered AI, I’m committed to pioneering responsible AI systems that democratize emerging technologies and drive meaningful societal impact.

 

My research interests are:

  • Human-AI interaction
  • Human-centered AI
  • User Experience (UX) research
  • Service design
  • AI for social good
  • Responsible AI
 

My background spans academia and industry: I hold a PhD in Information Sciences from the University of Illinois Urbana-Champaign (UIUC), served as Director of Data Science at a B2B AI company, and received academic training in social science theories and methods at the Chinese University of Hong Kong (CUHK).

 

“See, Feel, Think, Act” is my simple formula for navigating life’s journey. I’m excited to collaborate on research that bridges theory and practice, centers human needs, and creates AI systems that truly serve communities.

LATEST NEWS

  • Nov 11, 2025: One paper accepted by AIxVR’26, another work showcasing our efforts to advance human–AI interaction in extended reality.
  • Nov 8, 2025: One session and one individual paper accepted by AERA’26; many thanks to my amazing collaborators!
  • Oct 20, 2025: I’ll be in Las Vegas for AECT’25. Feel free to say hi if you’re around.
  • Aug 12, 2025: I’ll be traveling to Chicago to attend AIVO. Happy to connect while I’m there.
  • May 16, 2025: One paper on parent-child interaction accepted by AECT’25.
  • May 1, 2025: One paper on state space models accepted by ICML’25.
  • April 29, 2025: One paper on automating intervention discovery accepted by IJCAI’25.
  • April 3, 2025: Three papers accepted by AIED’25, celebrating the interdisciplinary teamwork that brought these ideas to life.
  • Jan 16, 2025: Two papers accepted by CHI’25; many thanks to my amazing collaborators for their support.
  • Jan 2025: Welcoming the new year with a paper on AI clones accepted by Computers in Human Behavior: Artificial Humans.
  • Aug 2024: Hosting a Special Interest Group (SIG) on “The Responsible Use of Large Multi-modal AI for Human Behavior Analysis” at CSCW’24.
  • Aug 2024: Interactive demo SOAP.AI accepted for the CSCW’24 Demo track.
  • July 2024: Presenting at the 2024 ASHA Convention in Seattle on using large multi-modal models (LMMs) for special education.
  • May 2024: Dissertation successfully defended — officially šŸ‘©‍šŸŽ“ Dr. Zheng!
  • Apr 2024: Included on UIUC’s Fall 2023 List of Teachers Ranked as Excellent.
  • Mar 2024: Presented research on large multi-modal models for special education at the University at Buffalo (SUNY).
  • Feb 2024: Gave a talk at Indiana University Bloomington on research in human-AI interaction; thanks to Prof. Susan Herring for the invitation.
  • Sept 2023: Presented at CSCW’23 in Minnesota on safety risks in social VR.

Selected First-Author Publications [Google Scholar]


  • [CHI’25] EvAlignUX: Advancing UX Evaluation through LLM-Supported Metrics Exploration.
    Q. Zheng, M. Chen, P. Sharma, Y. Tang, M. Oswal, Y. Liu, and Y. Huang.
  • [CHI’25] Evaluating Non-AI Experts’ Interaction with AI: A Case Study in Library Context.
    Q. Zheng, M. Chen, H. Park, Z. Xu, and Y. Huang.
  • [AIED’25] Towards AI-Enhanced Speech-Language Intervention Documentation: Opportunities and Design Goals.
    Q. Zheng, A. Choudhry, Z. Liu, P. Rabbani, Y. Hu, A. Olszewski, Y. Huang, and J. Xiong.
  • [CHB: Artif. Humans, 2025] Learning Through AI-Clones: Enhancing Self-Perception and Presentation Performance.
    Q. Zheng, J. Chen, and Y. Huang.
  • [CSCW’24 Companion] Towards Responsible Use of Large Multi-Modal AI to Analyze Human Social Behaviors.
    Q. Zheng, X. Lu, Q. Jin, J. Jain, H. Meadan-Kaplansky, H. Shi, J. Xiong, and Y. Huang.
  • [CSCW’24 Companion] SOAP.AI: A Collaborative Tool for Documenting Human Behavior in Videos through Multimodal Generative AI.
    Q. Zheng, P. Rabbani, D. Mansour, Y.-R. Lin, and Y. Huang.
  • [CSCW’23] Understanding Safety Risks and Safety Design in Social VR Environments.
    Q. Zheng, S. Xu, L. Wang, Y. Tang, R. Salvi, G. Freeman, and Y. Huang.
  • [CHI’23 EA] “Begin With the End in Mind”: Incorporating UX Evaluation Metrics into Design Materials of Participatory Design.
    Q. Zheng and Y. Huang.
  • [CHI’22] UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library.
    Q. Zheng, Y. Tang, Y. Liu, W. Liu, and Y. Huang.
  • [CHI’22 EA] Facing the Illusion and Reality of Safety in Social VR.
    Q. Zheng, T. Do, L. Wang, and Y. Huang.
  • [CSCW’21] “PocketBot Is Like a Knock-On-the-Door!”: Designing a Chatbot to Support Long-Distance Relationships.
    Q. Zheng, D. Markazi, Y. Tang, and Y. Huang.

Research Highlights

#1 UX Evaluation of Human-AI Interaction

[CHI'25] EvAlignUX

Evaluating UX in the context of AI’s complexity, unpredictability, and generative nature presents unique challenges. How can we support HCI researchers in creating comprehensive UX evaluation plans? In this paper, we introduce EvAlignUX, a system powered by large language models and grounded in scientific literature, designed to help HCI researchers explore evaluation metrics and their relationship to research outcomes. In a world where experience defines impact, we discuss the importance of shifting UX evaluation from a “method-centric” to a “mindset-centric” approach as the key to meaningful and lasting design evaluation.

Qingxiao Zheng, Minrui Chen, Pranav Sharma, Yiliu Tang, Mehul Oswal, Yiren Liu, Yun Huang

[CHI'22] Lit Review: UX Framework of Human-AI Interaction

RQ: When and how does AI engage with humans? What are the UX effects of one-on-one (dyadic AI) and multi-party (polyadic AI) interactions?

Early conversational agents (CAs) focused on dyadic human-AI interaction between a human and a CA; more recently, polyadic human-AI interaction, in which CAs are designed to mediate human-human interactions, has grown increasingly popular. CAs for polyadic interactions are unique because they encompass hybrid social interactions, i.e., human-to-CA, human-to-human, and human-to-group behaviors. However, research on polyadic CAs is scattered across different fields, making it challenging to identify, compare, and accumulate existing knowledge. To inform the future design of CA systems, we conducted a literature review of ACM publications and identified a set of works that conducted UX (user experience) research.

Qingxiao Zheng, Yiliu Tang, Yiren Liu, Weizi Liu, and Yun Huang

[CHI'23 LBW] Case Study: Bringing UX Metrics to Participatory Design

Participatory Design (PD) aims to empower users by involving them in various design decisions. However, PD’s evaluation criteria are usually set by the product team and used only at the end of the design process, without adequate user participation. To address this issue, we proposed introducing UX evaluation metrics into design materials at the INPUT phase of participatory design.

Qingxiao Zheng and Yun Huang

#2 AI for Social Good

[CSCW'24 Companion]

This multi-phase project, funded by NSF and IES, focuses on collaborating with speech-language pathologists to co-design and evaluate AI systems that support interventions for children with speech delays.

Qingxiao Zheng, Abhinav Choudhry, Parisa Rabbani, Yun Huang, et al.

[CHI'25]

This empirical study serves as a primer for interested service providers to determine whether and how large language model (LLM) technology should be integrated for their practitioners and the broader community. The insights pave the way for synergistic and ethical human-AI co-creation in service contexts.

Qingxiao Zheng, Minrui Chen, Zhongwei Xu, Hyanghee Park, Yun Huang

[Computers in Human Behavior: Artificial Humans]

This study examines the impact of AI-generated digital clones with self-images (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recording videos (self-recording group) to AI-clone videos (AI-clone group) for online English presentation practice.

Qingxiao Zheng, Joy Chen, Yun Huang

#3 AI for Social Interaction
[Figure: safety risks, risk behaviors, protection mechanisms, and design implications]

[CSCW'23] Avatar-Based Social Interaction: Risk Behavioral Cues

RQ: How do people (e.g., victims, attackers, bystanders, or spectators) respond to safety risks posed by virtual avatars, and what are the design implications for avatar-based human-human interactions?

Understanding emerging safety risks in nuanced social VR spaces and how existing safety features are used is crucial for the future development of safe and inclusive 3D social worlds. Prior research on safety risks in social VR is mainly based on interview or survey data about social VR users’ experiences and opinions, which lacks “in-situ observations” of how individuals react to these risks. Using two empirical studies, this paper seeks to understand safety risks and safety design in social VR. In Study 1, we investigated 212 YouTube videos and their transcripts that document social VR users’ immediate experiences of safety risks as victims, attackers, or bystanders. We also analyzed spectators’ reactions to these risks shown in comments to the videos. In Study 2, we summarized 13 safety features across various social VR platforms and mapped how each existing safety feature in social VR can mitigate the risks identified in Study 1. Based on the uniqueness of social VR interaction dynamics and users’ multi-modal simulated reactions, we call for further rethinking and reapproaching safety designs for future social VR environments and propose potential design implications for future safety protection mechanisms in social VR.

Qingxiao Zheng, Shengyang Xu, Lingqing Wang, Yiliu Tang, Rohan C. Salvi, Guo Freeman, and Yun Huang

[CSCW'21] Navigating Social Boundaries: Chatbot-Mediated Communication

RQ: How can we design a chatbot to mediate emotional communication?

Many couples experience long-distance relationships (LDRs), and “couple technologies” have been designed to influence certain relational practices or maintain them in challenging situations. Chatbots show great potential in mediating people’s interactions. However, little is known about whether and how chatbots can be desirable and effective for mediating LDRs. In this paper, we conducted a two-phase study to design and evaluate a chatbot, PocketBot, that aims to provide effective interventions for LDRs. In Phase I, we adopted an iterative design process by conducting need-finding interviews to formulate design ideas and piloted the implemented PocketBot with 11 participants. In Phase II, we evaluated PocketBot with eighteen participants (nine LDR couples) in a week-long field trial followed by exit interviews, which yielded empirical understandings of the feasibility, effectiveness, and potential pitfalls of using PocketBot. First, a knock-on-the-door feature allowed couples to know when to resume an interaction after evading a conflict; this feature was preferred by certain participants (e.g., participants with stoic personalities). Second, a humor feature was introduced to spice up couples’ conversations. This feature was favored by all participants, although some couples’ perceptions of the feature varied due to their different cultural or language backgrounds. Third, a deep talk feature enabled couples at different relational stages to conduct opportunistic conversations about sensitive topics for exploring unknowns about each other, which resulted in surprising discoveries between couples who had been in relationships for years. Our findings provide inspiration for future conversation-based couple technologies that support emotional communication.

Qingxiao Zheng, Daniela Markazi, Yiliu Tang, and Yun Huang

Past Projects (Industry)

AI Engine: JoveArch
JoveArch is an AI-based financial platform designed to provide secure, multi-algorithm modeling services in the cloud. As a next-generation quantitative computing engine, JoveArch employs enhanced analysis, artificial intelligence, and data governance strategies and can be applied to investment transactions, commodity research, and industry regulation.
Data Service: YeeSight
YeeSight is a data platform providing all-in-one information processing solutions for in-depth analysis of cross-language texts. It leverages a global database of multilingual texts and social media with hundreds of billions of entries to build multilingual NLP algorithms for word segmentation, part-of-speech tagging, named entity recognition, sensitivity analysis, sentiment analysis, automatic summarization, keyword extraction, text classification, text quality assessment, hotspot clustering, event element extraction, and knowledge graph building.
AI Editing: Smart News Base
Smart News Base is an AI-driven data platform that provides one-stop news editing services for journalists. Its intelligent writing feature is supported by large-scale news and cross-language social media sources. Leveraging neural network machine translation, it also aids editors in global news reporting. "Feng Mian News" is using it!
Global Risk Report
The Global Risk research project transformed unstructured data from around the world into structured data using AI technologies such as data mining, text mining, and machine learning. Through early corpus tagging, we subdivided major global risks from June 2017 to May 2018 into 30 subclasses following the World Economic Forum's classification, established a data model of global risk, and conducted network analysis. In our analysis, we estimated probabilities and forecasted potential impacts in the coming years. A report was released at "Rise of Data Democracy," the 2018 World Economic Forum Summer Davos in Tianjin.
Data Analysis: Social Media
This project explored information dissemination in mobile social networks. The methods used could be beneficial for organizations seeking to analyze and identify problems in online public opinion dissemination. Two topics were studied: “Virtual Reality Brands Community” and “Network Attack.” Journal papers were published in Information Discovery and Delivery (2020) and Data Analysis and Knowledge Discovery (2019).
PoemAR: Concepts Learning
This project focused on using Augmented Reality (AR) for situational learning in Chinese ancient poetry. We developed an application depicting scenes from the poem Jiangxue, enriched with music and recitation, aiming to evoke student emotions, a crucial aspect in learning ancient poems. Traditional methods often fragment poems, hindering comprehension and emotional connection. The emphasis was on understanding imagery and fostering aesthetic appreciation in ancient poetry learning.

Fun Facts

I share my writing time with my wonderful dog, who appears to be ever-curious.

When not experimenting with AI, I’m experimenting with new recipes in the kitchen.

My workspace is clutter-free.

My dog’s AI Clone.

🐾 You’ve Reached The Tail End 🐶

See you next time šŸ™‚