Changan Chen 陈昌安

I am a postdoctoral researcher at Stanford University hosted by Prof. Fei-Fei Li and Prof. Ehsan Adeli. I received my PhD from UT Austin, advised by Prof. Kristen Grauman. I am broadly interested in building machine learning models that perceive the world through multiple modalities and interact with it. Currently, I work on multimodal perception and generation for 3D scenes and humans.

Previously, I spent five months working with Prof. Andrea Vedaldi and Dr. Natalia Neverova at FAIR, London, and two years as a visiting researcher at FAIR working with Prof. Kristen Grauman. During my undergrad, I spent a wonderful year working with Prof. Greg Mori on sports video analysis and efficient deep learning, eight months with Prof. Alexandre Alahi on social navigation in crowds, and eight months with Prof. Manolis Savva on relational graph reasoning for navigation.

My first name is pronounced /tʃæn'æn/, with the g silent.

Research opportunities: I am happy to collaborate with motivated undergraduate and master's students at Stanford, and to answer questions about my research. If you are interested, please send me an email.

CV | E-Mail | Google Scholar | Github | Twitter | Dissertation

Photo credit: Jasmin Zhang

News
June 2024 Received the Distinguished Paper Award at the EgoVis Workshop, CVPR 2024 for SoundSpaces 2.0.
May 2024 Joined the Stanford Vision and Learning Lab as a postdoctoral researcher!
May 2024 Defended my PhD dissertation, 4D Audio-Visual Learning: A Visual Perspective of Sound Propagation and Production!
March 2024 Organizing the first Multimodalities for 3D Scenes (M3DS) workshop at CVPR 2024!
March 2024 Organizing the fifth Embodied AI Workshop at CVPR 2024!
October 2023 Organized the second AV4D workshop at ICCV 2023!
June 2023 Gave keynote talks at the Ambient AI Workshop, ICASSP 2023, and the Sight and Sound Workshop, CVPR 2023.
February 2023 Co-organized the Embodied AI Workshop and the 3rd SoundSpaces Challenge at CVPR 2023!
January 2023 Co-organized L3DAS23: Learning 3D Audio Sources for Audio-Visual Extended Reality at ICASSP 2023!
October 2022 Organized the first AV4D: Visual Learning of Sounds in Spaces workshop at ECCV 2022!
July 2022 Joined FAIR London for a summer internship!
March 2022 Honored to receive the 2022 Adobe Research Fellowship!
February 2022 Organized the second SoundSpaces Challenge at the Embodied AI Workshop, CVPR 2022!
February 2021 Organized the first SoundSpaces Challenge at the Embodied AI Workshop, CVPR 2021!
May 2020 Joined Facebook AI Research as a visiting researcher.
Selected Publications | All Publications