Qingxiao Zheng (on the job market)

Postdoctoral Associate

Institute for Artificial Intelligence and Data Science

Department of Computer Science and Engineering

University at Buffalo, SUNY

📧  qingxiao[at]buffalo[dot]edu

📍 Buffalo, NY

[Google Scholar] [X/Twitter] [LinkedIn]

Hello, beautiful people! 

 

I’m Qingxiao, and I believe technology should amplify human potential, not replace it. As a researcher, designer, and innovator specializing in human-centered AI, I’m committed to pioneering responsible AI systems that democratize emerging technologies and drive meaningful societal impact.

 

My research interests are:

  • Human-AI interaction
  • Human-centered AI
  • AI systems design and evaluation
  • Responsible AI for Social Good
 

My background spans academia and industry: I hold a PhD in Information Sciences from the University of Illinois Urbana-Champaign (UIUC), served as Director of Data Science at a B2B AI company, and received academic training in social science theories and methods at the Chinese University of Hong Kong (CUHK).

 

“See, Feel, Think, Act” is my simple formula for navigating life’s journey. I’m excited to collaborate on research that bridges theory and practice, centers human needs, and creates AI systems that truly serve communities.

LATEST NEWS

  • Jan 27, 2026: Excited to share that our paper on AI simulation for law enforcement received the 🏆 “Best Paper Award” at IEEE AIxVR’26.
  • Jan 15, 2026: Three papers accepted by CHI’26. Shout-out to my amazing collaborators!
  • Jan 15, 2026: One paper accepted by the journal Computers and Education: Artificial Intelligence.
  • Jan 12, 2026: Kicked off the new year with a paper I mentored on multimodal interfaces accepted at IUI’26.
  • Dec 22, 2025: One paper on speech-language development in individuals with special needs accepted by Frontiers in Education.
  • Dec 4, 2025: Gave a talk at UB on research in designing AI systems for education, thanks to Prof. Christopher Hoadley for the invitation.
  • Nov 20-22, 2025: I’ll be in D.C. for ASHA’25 to present our research progress. Happy to connect.
  • Nov 11, 2025: One paper accepted by AIxVR’26, another work showcasing our efforts in advancing human–AI interaction.
  • Nov 8, 2025: One session and one individual paper accepted by AERA’26, many thanks to my amazing collaborators!
  • Oct 20, 2025: I’ll be in Las Vegas for AECT’25. Feel free to say hi if you’re around.
  • Aug 12, 2025: I’ll be traveling to Chicago to attend AIVO. Happy to connect while I’m there.
  • May 16, 2025: One paper on parent-child interaction accepted by AECT’25.
  • May 2025: I will serve as an Associate Chair (AC) for CHI 2026.
  • May 1, 2025: One paper on state space models accepted by ICML’25.
  • April 29, 2025: One paper on automating intervention discovery accepted by IJCAI’25.
  • April 3, 2025: Three papers accepted by AIED’25, celebrating the interdisciplinary teamwork that brought these ideas to life.
  • Jan 16, 2025: Two papers accepted by CHI’25, many thanks to my amazing collaborators for their support.
  • Jan 2025: Welcoming the new year with a paper on AI clones accepted by Computers in Human Behavior: Artificial Humans.
  • Aug 2024: I will host a Special Interest Group (SIG) on “The Responsible Use of Large Multi-modal AI for Human Behavior Analysis” at CSCW’24.
  • Aug 2024: Our interaction design Soap.AI was accepted to the CSCW’24 Demo track.
  • July 2024: I will present on using large multimodal models (LMMs) for special education at the 2024 ASHA Convention in Seattle.
  • May 2024: I will serve as an Associate Chair (AC) for CHI 2025.
  • May 2024: Dissertation successfully defended — officially Dr. Zheng!
  • Apr 2024: Named to UIUC’s Fall 2023 “List of Teachers Ranked as Excellent.”
  • Mar 2024: Presented research on large multi-modal models for special education at the University at Buffalo (SUNY).
  • Feb 2024: Gave a talk at Indiana University Bloomington on research in human-AI interaction, thanks to Prof. Susan Herring for the invitation.
  • Sept 2023: Presented at CSCW’23 in Minnesota on safety risks in social VR.

Publications: [Google Scholar]

Research Highlights

#1 Research on AI Systems Design and Evaluation

[CHI'25] EvAlignUX: Supporting AI System Creators in Design Evaluation

Evaluating UX in the context of AI’s complexity, unpredictability, and generative nature presents unique challenges. How can we support HCI researchers in creating comprehensive UX evaluation plans? In this paper, we introduce EvAlignUX, a system powered by large language models and grounded in scientific literature, designed to help HCI researchers explore evaluation metrics and their relationship to research outcomes. In a world where experience defines impact, we discuss the importance of shifting UX evaluation from a “method-centric” to a “mindset-centric” approach as the key to meaningful and lasting design evaluation.

[CHI'22] Lit Review: UX Evaluation Framework for Human–AI Interaction

This paper asks a core design and evaluation question: when and how should AI engage in human interaction, and how do dyadic versus polyadic configurations shape user experience? As conversational agents move from one-on-one interactions to mediating multi-party social dynamics, they introduce distinct UX challenges around coordination, social roles, and interpretability. We synthesize UX-focused ACM studies to establish a design-oriented understanding of how different interaction configurations affect human experience and to inform future approaches to evaluating human-AI interaction systems.

[CHI'23 LBW] Case Study: Bringing UX Metrics into Design Process

Participatory Design (PD) aims to empower users by involving them in design decisions. However, PD evaluation criteria are usually set by the product team and applied only at the end of the design process, without adequate user participation. To address this issue, we proposed introducing UX evaluation metrics into design materials at the input phase of participatory design.

[CHI'26] Speculative Design for AI–Biohybrid Systems: Debating Possible Futures

This artistic collaborative project introduces a tangible design fiction in which participants take part in a speculative 2052 dining experience that involves consuming biohybrid flying robots, using multisensory performance and ritual to probe how people reason about future food technologies. The study shows that embodied, ambiguous encounters surface nuanced ethical, cultural, and affective negotiations around edibility, sentience, and sustainability, advancing HCI methods for evaluating speculative AI–biohybrid systems beyond abstract discussion.

#2 Design, Build, and Evaluate Domain-Specific AI Systems

[Series of Work] AI Systems for Speech-Language Services

In speech-language service and care contexts, my research examines how AI systems can be responsibly designed and evaluated to support speech-language pathologists, caregivers, and children across clinical, home, and training settings. Rather than optimizing for automation or performance alone, this program of work investigates how expertise-aligned AI shapes trust, confidence, emotional engagement, and professional judgment in speech-language services. This research is primarily supported by the NSF National AI Institute for Exceptional Education and has most recently received a 2025 research grant from the Organization for Autism Research (Co-PI).

[CHI'26] AI for Law Enforcement Trauma-Informed Interviews

In the law enforcement context, this collaborative project examines what is gained and lost when high-stakes, trauma-informed communication training shifts from live, actor-based role-play to AI-based simulations. Through a mixed-methods study with police recruits, it shows that AI is most effective when strategically sequenced with human simulations, functioning as a complementary scaffold that reshapes emotional engagement, preparedness, and learning rather than replacing human realism.

[CHI'26] AI for Medical Simulation

In the medical training context, this collaborative project investigates when an AI facilitator should proactively intervene versus remain user-initiated in immersive medical training, through a mixed-methods study comparing proactive and on-demand AI guidance in an XR lumbar puncture simulator. While learning outcomes were similar, qualitative findings reveal that the perceived value of AI proactivity depends on task phase, cognitive load, and learner preference, leading to a boundary framework for calibrating AI intervention in high-stakes, cognitively demanding training environments.

[AIxVR'26] AI for Law Enforcement De-Escalation Training (🏆 Best Paper Award)

In the law enforcement context, we present EMSIM, an AI-driven VR system for de-escalation training that integrates an LLM-based, rubric-guided empathy evaluator to dynamically shape scenario progression and provide real-time, feedback-rich reflection for both trainees and instructors. Through mixed-methods evaluation with law enforcement trainees and trainers, the study demonstrates how AI-enabled XR systems can support exploratory learning and system-level evaluation of communication strategies, while surfacing key design tradeoffs around controllability, feedback timing, and realism in high-stakes human–AI training systems.


[CHI'25] AI for Public Library Service

This empirical study serves as a primer for interested service providers to determine if and how Large Language Model (LLM) technology can be integrated to support their practitioners and the broader community. The insights pave the way for synergistic and ethical human-AI co-creation in service contexts.

[Computers in Human Behavior: Artificial Humans (2025)] Learning from One's Self-Clone?

This study examines the impact of AI-generated digital clones with one’s own self-image (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recorded videos with AI-clone videos for online English presentation practice.

#3 Understand and Innovate AI-Mediated Social Interactions
[Figure: safety risks, behaviors, protection mechanisms, and design implications in social VR]

[CSCW'23] Avatar-Based Social Interaction: Risk Behavioral Cues

RQ: How do people (e.g., victims, attackers, bystanders, or spectators) respond to safety risks posed by virtual avatars, and what are the design implications for avatar-based human-human interactions?

Understanding emerging safety risks in nuanced social VR spaces and how existing safety features are used is crucial for the future development of safe and inclusive 3D social worlds. Prior research on safety risks in social VR is mainly based on interview or survey data about social VR users’ experiences and opinions, which lacks “in-situ observations” of how individuals react to these risks. Using two empirical studies, this paper seeks to understand safety risks and safety design in social VR. In Study 1, we investigated 212 YouTube videos and their transcripts that document social VR users’ immediate experiences of safety risks as victims, attackers, or bystanders. We also analyzed spectators’ reactions to these risks shown in comments to the videos. In Study 2, we summarized 13 safety features across various social VR platforms and mapped how each existing safety feature in social VR can mitigate the risks identified in Study 1. Based on the uniqueness of social VR interaction dynamics and users’ multi-modal simulated reactions, we call for further rethinking and reapproaching safety designs for future social VR environments and propose potential design implications for future safety protection mechanisms in social VR.

[CSCW'21] Navigating Social Boundaries: Chatbot-Mediated Communication

RQ: How can we design a chatbot to mediate emotional communication?

Many couples experience long-distance relationships (LDRs), and “couple technologies” have been designed to influence certain relational practices or maintain them in challenging situations. Chatbots show great potential in mediating people’s interactions. However, little is known about whether and how chatbots can be desirable and effective for mediating LDRs. In this paper, we conducted a two-phase study to design and evaluate a chatbot, PocketBot, that aims to provide effective interventions for LDRs. In Phase I, we adopted an iterative design process, conducting need-finding interviews to formulate design ideas and piloting the implemented PocketBot with 11 participants. In Phase II, we evaluated PocketBot with 18 participants (nine LDR couples) in a week-long field trial followed by exit interviews, which yielded empirical understandings of the feasibility, effectiveness, and potential pitfalls of using PocketBot. First, a knock-on-the-door feature allowed couples to know when to resume an interaction after evading a conflict; this feature was preferred by certain participants (e.g., participants with stoic personalities). Second, a humor feature was introduced to spice up couples’ conversations. This feature was favored by all participants, although some couples’ perceptions of it varied due to their different cultural or language backgrounds. Third, a deep-talk feature enabled couples at different relational stages to conduct opportunistic conversations about sensitive topics for exploring unknowns about each other, which resulted in surprising discoveries between couples who had been in relationships for years. Our findings provide inspiration for future conversation-based couple technologies that support emotional communication.

Past Projects (Industry)

AI Engine: JoveArch
JoveArch is an AI-based financial platform designed to provide secure, multi-algorithm modeling services in the cloud. As a next-generation quantitative computing engine, JoveArch combines advanced analytics, artificial intelligence, and data governance strategies, and can be applied to investment transactions, commodity research, and industry regulation.
Data Service: YeeSight
YeeSight is a data platform providing one-stop information processing solutions for in-depth analysis of cross-language texts. It leverages a global multilingual text database (including social media) with hundreds of billions of entries to build multilingual NLP algorithms for word segmentation, part-of-speech tagging, named entity recognition, sensitivity analysis, sentiment analysis, automatic summarization, keyword extraction, text classification, text quality assessment, hotspot clustering, event element extraction, and knowledge graph building.
AI Editing: Smart News Base
Smart News Base is an AI-driven data platform that provides one-stop news editing services for journalists. Its intelligent writing feature is supported by large-scale news and cross-language social media sources. Leveraging neural network machine translation, it also aids editors in global news reporting. "Feng Mian News" is using it!
Global Risk Report
The Global Risk research project transformed unstructured data from around the world into structured data using AI techniques such as data mining, text mining, and machine learning. Through early corpus tagging, we subdivided major global risks from June 2017 to May 2018 into 30 subclasses following the World Economic Forum’s classification, established a data model of global risk, and conducted network analysis. In our analysis, we illustrated risk probabilities and forecast their potential influence in the coming years. A report was released at “Rise of Data Democracy” during the 2018 World Economic Forum Summer Davos in Tianjin.
Data Analysis: Social Media
This project explored information dissemination in mobile social networks. The methods used could be beneficial for organizations seeking to analyze and identify problems in online public opinion dissemination. Two topics were studied: “Virtual Reality Brands Community” and “Network Attack.” Journal papers were published in Information Discovery and Delivery (2020) and Data Analysis and Knowledge Discovery (2019).
PoemAR: Concept Learning
This project focused on using Augmented Reality (AR) for situational learning of ancient Chinese poetry. We developed an application depicting scenes from the poem Jiangxue (“River Snow”), enriched with music and recitation, aiming to evoke students’ emotions, a crucial aspect of learning ancient poems. Traditional methods often fragment poems, hindering comprehension and emotional connection. The emphasis was on understanding imagery and fostering aesthetic appreciation in ancient poetry learning.

Fun Facts

I share writing time with my wonderful dog, who appears to be ever-curious.

When not experimenting on AI, I’m experimenting with new recipes in the kitchen.

My workspace is clutter-free.

My dog’s AI Clone.

🐾 You’ve Reached The Tail End 🐶 

See you next time 🙂