Xuan (Tan Zhi Xuan)

Assistant Professor, NUS Computer Science

headshot.jpg

I am an Assistant Professor in the National University of Singapore’s Department of Computer Science with a joint appointment at the A*STAR Institute of High Performance Computing (IHPC).

I run the Cooperative Intelligence & Systems (CoSI) lab, which is focused on scaling cooperative intelligence via rational, model-based AI engineering.

I am actively recruiting PhD students, post-doctoral researchers, and research assistants — see the recruiting page for details!

Previously, I completed my PhD in the MIT Probabilistic Computing Project and Computational Cognitive Science lab, advised by Vikash Mansinghka and Josh Tenenbaum.

I publish under the name “Tan Zhi-Xuan”. This means an APA citation should look like (Zhi-Xuan, YYYY). “Tan” is my surname (transliterated from Hokkien), “Zhi Xuan” is my given name (in Mandarin / pinyin), and “Xuan” (pronounced ɕɥɛn / sh-yen) is the name I usually go by. I use they/them or she/her pronouns.

research interests

My research on cooperative intelligence sits at the intersection of Bayesian modeling, AI alignment, and cognitive science, asking questions like:

  • How can we specify rich yet structured generative models of human reasoning, decision-making, and value formation?1,2,3,4,5
  • How can we efficiently perform Bayesian inference over such models, in order to accurately learn human goals1,6,7, values, and norms4,8?
  • How can we reverse engineer the computational foundations of human cooperation and human normativity4,8?
  • How can we engineer AI systems using these foundations, so that they act reliably in accordance with human goals3 and norms4?

To answer these questions, my work includes the development of infrastructure for probabilistic programming9,10 and model-based planning11,12, so as to enable fast and flexible Bayesian inference over complex models of agents and their environments. By developing engineering platforms for more auditable AI systems with stronger algorithmic guarantees, I hope to support the growth of well-founded and human-compatible AI.

I see the ultimate goal of this research as steering the development and deployment of AI towards beneficial and equitable outcomes for all, despite our plural and often divergent values. For more on my views regarding AI alignment, safety, and the importance of cooperation in an increasingly automated future, see my talks and interviews on contractualist AI alignment and the limitations of preference-based alignment. I also serve in advisory roles for several AI alignment non-profits (PIBBSS; Meaning Alignment Institute).

mentoring & teaching

If you are interested in working with me, please see my recruiting page!

I am fortunate to have worked with and mentored many Masters and junior PhD students over the course of my research career, including Jordyn Mann (on neurosymbolic goal inference13), Gloria Lin (on active structure learning for Gaussian Processes14), Jovana Kondic (on inverse motion planning15), and Lance Ying (on integrating large language models with Bayesian theory-of-mind16).

I have also served as a mentor for the PIBBSS Summer Fellowship from 2022 to 2024, working with Zachary Peck, Mel Andrews, Ninell Oldenburg, and Agustín Martinez Suñé on a range of topics across philosophy of AI, social norm learning4, and safety guarantees for LLM-based agents.

In Fall 2025, I taught a graduate seminar on Rational Approaches to Cooperative Intelligence. In Spring 2022, I was a teaching assistant for the graduate seminar on Bayesian Modeling and Inference taught by Tamara Broderick.

I am committed to promoting diversity, equity, inclusion, and justice (DEIJ) in computer science. To that end, I have been an organizer for groups such as Julia Gender Inclusive and THRIVE @ MIT EECS. If you are an underrepresented student in computer science looking for support or advice, feel free to reach out!

recent news

May 25, 2025 I will be starting as a Presidential Young Professor (Assistant Professor) at NUS Computer Science in August 2025.
Feb 12, 2025 I gave invited seminars on Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming at NTU CCDS and NUS School of Computing.
Oct 16, 2024 I gave an invited talk at the Simons Institute Workshop on Alignment, Trust, Watermarking and Copyright Issues in LLMs on Beyond Preferences in AI Alignment.
Aug 30, 2024 Our position paper, Beyond Preferences in AI Alignment, is finally out after 2 years in the making!
Aug 9, 2024 I co-organized and served as a panelist for the inaugural RL Safety Workshop at RLC 2024.
Jun 16, 2024 I gave an invited talk on Human-Aligned Language Agents via Cooperative Language-Guided Inverse Planning at the 8th Annual CHAI Workshop.
Jun 16, 2024 I gave an invited talk on going Beyond Preferences in AI Alignment at the Rio Sociotechnical AI Safety Workshop, co-located with FAccT 2024.
May 27, 2024 I gave a webinar on Pluralism in AI Alignment, hosted by ALIGN, the AI Alignment Network in Japan, based on prior work on Contractualist AI Alignment.