About Me
Hi! I am currently a Postdoc at the University of Hong Kong and a former Research Fellow at the Center for AI Safety. My research lies at the intersection of the philosophy of AI, decision theory, epistemology, and global priorities research. At the University of Hong Kong, I am a Principal Investigator in the AI and Humanity Lab, where my research theme is "AI in the Extreme". Questions I'm working on include: How should we act in the face of the extreme risks posed by AI? How should we allocate resources if AIs can enjoy extreme levels of well-being? And how confident should we be that we live in a simulation if an AI can sustain extremely large simulated populations? In my related work in global priorities research, I am exploring the prospects of a "knowledge-first" decision theory that gives sensible recommendations for gambles involving extremely tiny probabilities of extremely large values.
Aside from my work in AI and global priorities research, I also have research interests in epistemology (both formal and traditional), the philosophy of language, decision theory, and ethics. I have published work on the Preface Paradox, the St. Petersburg Paradox, and Moore's Paradox, and I have papers on paradoxes in infinite ethics. I like paradoxes.
![Hong Headshot Photo.jpg](https://static.wixstatic.com/media/0d8da8_afc4a952936442b2852b292e71a5baf0~mv2.jpg/v1/fill/w_460,h_460,al_c,q_80,usm_0.66_1.00_0.01,enc_avif,quality_auto/Hong%20Headshot%20Photo.jpg)