Xuan (Tan Zhi Xuan)
Pronunciation: ɕɥɛn / sh-yen
I am a 6th-year PhD student in the MIT Probabilistic Computing Project and the Computational Cognitive Science lab, advised by Vikash Mansinghka and Josh Tenenbaum.
My research sits at the intersection of Bayesian modeling, AI alignment, and cognitive science, asking questions like:
- How can we specify rich yet structured generative models of human reasoning, decision-making, and value formation? [1–5]
- How can we perform Bayesian inference over such models, in order to accurately learn human goals [1,6,7], values, and norms [4,8]?
- How can we build AI systems that act reliably in accordance with these inferred goals [3] and norms [4]?
To answer these questions, my work includes the development of infrastructure for probabilistic programming [9,10] and model-based planning [11,12], so as to enable fast and flexible Bayesian inference over complex models of agents and their environments. By developing engineering platforms for more auditable AI systems with stronger algorithmic guarantees, I hope to support the growth of well-founded and human-compatible AI.
I am committed to promoting diversity, equity, inclusion, and justice (DEIJ) in computer science, and to helping steer the development of AI towards beneficial and equitable outcomes for all. To those ends, I help to organize DEIJ groups such as THRIVE at MIT and Julia Gender Inclusive, and serve as a community council member of the Collective Intelligence Project.
I publish under the name “Tan Zhi-Xuan”. This means an APA citation should look like (Zhi-Xuan, YYYY). “Tan” is my surname (transliterated from Hokkien), “Zhi Xuan” is my given name (in Mandarin / pinyin), and “Xuan” (pronounced ɕɥɛn / sh-yen) is the name I usually go by.
mentoring & teaching
I am fortunate to have worked with and mentored many master's and junior PhD students over the course of my PhD, including Jordyn Mann (on neurosymbolic goal inference [13]), Gloria Lin (on active structure learning for Gaussian processes [14]), Jovana Kondic (on inverse motion planning [15]), and Lance Ying (on integrating large language models with Bayesian theory-of-mind [16]).
I have also served as a mentor for the PIBBSS Summer Fellowship from 2022 to 2024, working with Zachary Peck, Mel Andrews, Ninell Oldenburg, and Agustín Martinez Suñé on a range of topics across philosophy of AI, social norm learning [4], and safety guarantees for LLM-based agents.
In Spring 2022, I was a teaching assistant for the graduate seminar on Bayesian Modeling and Inference taught by Tamara Broderick. As a TA, I supported students in understanding and presenting papers on variational inference, MCMC methods, and Bayesian non-parametrics, while providing guidance and debugging support for their course projects.
recent news
- Aug 30, 2024: Our position paper, Beyond Preferences in AI Alignment, is out after two years in the making!
- Aug 9, 2024: I co-organized and served as a panelist for the inaugural RL Safety Workshop at RLC 2024.
- Jun 16, 2024: I gave an invited talk on Human-Aligned Language Agents via Cooperative Language-Guided Inverse Planning at the 8th Annual CHAI Workshop.
- Jun 16, 2024: I gave an invited talk on going Beyond Preferences in AI Alignment at the Rio Sociotechnical AI Safety Workshop, co-located with FAccT 2024.
- May 27, 2024: I gave a webinar on Pluralism in AI Alignment, hosted by ALIGN, the AI Alignment Network in Japan, based on prior work on Contractualist AI Alignment.
- May 22, 2024: I received the 2024 Paul L. Penfield Student Service Award for my contributions to fostering diversity, equity, and inclusion in the MIT EECS department.
- May 15, 2024: I gave a talk on Towards Reliable AI Assistants via Probabilistic Programming at AI Wednesdays, a space for AI builders and engineers in the Singaporean civil service.
- May 10, 2024: I contributed to a position paper on moving Towards Guaranteed Safe AI, writing sections on the role of probabilistic programming and model-based planning in improving AI safety.