Yunze (Lorenzo) Xiao

Carnegie Mellon University in Qatar

Yunze Xiao.jpg

2080-O

Education City, Al Luqta St, Ar Rayyan

Doha, Qatar

I am an incoming master's student at the Language Technologies Institute (LTI) at Carnegie Mellon University, where I am advised by Prof. Mona Diab. I also work closely with Prof. Dakuo Wang.

Previously, I was advised by Prof. Houda Bouamor and Prof. Kemal Oflazer at Carnegie Mellon University in Qatar. I also had the amazing opportunity to work with Dr. Firoj Alam at QCRI and Prof. Roy Ka-Wei Lee at SUTD.

Broadly, I aim to develop large language models that move beyond surface-level fluency toward genuine human-like intelligence: systems that can think, remember, feel, and interact in socially and cognitively coherent ways. This goal connects three intersecting research directions across NLP, Computational Social Science, and HCI:

  1. Anthropomorphism as a Modeling Dimension: How do training objectives, architectural decisions, and interface designs shape the emergence of human-like traits in LLMs? I study anthropomorphism not just as a risk or illusion, but as a controllable and analyzable design space.

  2. Anthropomorphism for Applications: How can human-like attributes (such as emotional resonance, persona consistency, or contextual memory) be used to improve LLM performance in real-world applications like education, therapy, and collaborative writing?

  3. Architectures for Synthetic Human-Likeness: What design innovations (e.g., memory modules, affective simulation, multi-modal grounding) are needed to support truly interactive and situated AI agents? I seek to build systems that engage users as intuitive, emotionally-aware collaborators.

Ultimately, I aim to apply this research toward building authentic AI companions as a potential answer to global loneliness and disconnection.

Feel free to reach out to me via email or X.

news

Sep 28, 2024 Happy to share that I am reviewing for ICWSM 2025 and CSCW 2025!
Sep 28, 2024 Happy to share that our work on Cloaked Offensive Language is accepted by EMNLP 2024!
Jun 19, 2024 Our work on Cloaked Offensive Language is on arxiv now!

selected publications