Exoplanet Imaging via Differentiable Rendering
Seeing new light from exoplanets.
I am a Postdoctoral Associate at MIT CSAIL, working with Prof. William T. Freeman, and a Visiting Scientist at the Harvard-Smithsonian Center for Astrophysics, working with Dr. Cecilia Garraffo.
I develop machine learning algorithms to extract information that is seemingly invisible and lies beyond human perception. By pushing the boundaries of computational imaging and computer vision, my work aims to uncover hidden knowledge and accelerate scientific progress, expanding human vision from the intuitive to the algorithmic.
I completed my Ph.D. in Computer Science at the University of Maryland, advised by Prof. Amitabh Varshney and working closely with Prof. Christopher A. Metzler and Prof. Jia-Bin Huang.
Neural signal representations enable breakthroughs in correcting for severe time-varying wavefront aberrations caused by scattering media.
3D motion magnification allows us to magnify subtle motions in seemingly static scenes while supporting rendering from novel views.
Physics-based neural signal representations accelerate real-time 3D refocusing in Fourier ptychographic microscopy and overcome barriers to clinical diagnosis.
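For context on what "3D refocusing" means computationally, below is a minimal, generic sketch of digital refocusing via angular-spectrum propagation of a recovered complex field. This is standard Fourier optics, not the physics-based neural representation used in this work, and all function names, shapes, and parameters are illustrative assumptions.

```python
# Generic angular-spectrum refocusing of a recovered complex field.
# Illustrative only; not this project's neural-representation approach.
import numpy as np

def refocus(field, dz, wavelength, pixel_size):
    """Propagate a complex field `field` (H x W) by a distance dz (meters)."""
    H, W = field.shape
    fy = np.fft.fftfreq(H, d=pixel_size)
    fx = np.fft.fftfreq(W, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Angular-spectrum transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: sweep a small focal stack from one recovered field (placeholder data).
field = np.exp(1j * np.random.rand(256, 256))
stack = [np.abs(refocus(field, dz, 0.5e-6, 1.0e-6)) ** 2
         for dz in np.linspace(-20e-6, 20e-6, 5)]
```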
The only true voyage of discovery would be not to visit strange lands, but to possess other eyes, to behold the universe through the eyes of another.
View interpolation without 3D reconstruction or correspondence.
Remarkable compression rates by representing light fields as neural network weights. The simple and compact formulation also supports angular interpolation to generate novel viewpoints.
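A minimal sketch of the core idea, assuming a coordinate MLP that maps 4D ray coordinates (u, v, s, t) to RGB: the network weights themselves act as the compressed light field, and querying fractional angular coordinates yields interpolated novel views. The architecture, hyperparameters, and names below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: a light field stored as the weights of a coordinate MLP.
import torch
import torch.nn as nn

class LightFieldMLP(nn.Module):
    """Maps a 4D ray coordinate (u, v, s, t) in [-1, 1]^4 to an RGB color."""
    def __init__(self, hidden=256, layers=6):
        super().__init__()
        dims = [4] + [hidden] * layers + [3]
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        blocks[-1] = nn.Sigmoid()          # final activation: colors in [0, 1]
        self.net = nn.Sequential(*blocks)

    def forward(self, rays):               # rays: (N, 4)
        return self.net(rays)              # colors: (N, 3)

model = LightFieldMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fit the network to sampled (ray, color) pairs from the captured light field;
# after training, the weights are the compressed representation.
def train_step(rays, colors):
    opt.zero_grad()
    loss = torch.mean((model(rays) - colors) ** 2)
    loss.backward()
    opt.step()
    return loss.item()

# Angular interpolation: query a fractional camera position (u, v) that was
# never captured; the continuous MLP renders a plausible novel viewpoint.
st = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 128), torch.linspace(-1, 1, 128), indexing="ij"), -1)
st = st.reshape(-1, 2)
uv = torch.full((st.shape[0], 2), 0.25)    # in-between view position
novel_view = model(torch.cat([uv, st], dim=-1)).reshape(128, 128, 3)
```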
Template from Keunhong Park