NeuWS: Neural Wavefront Shaping for Guidestar-Free Imaging Through Static and Dynamic Scattering Media
Neural signal representations enable breakthroughs in correcting for severe time-varying wavefront aberrations caused by scattering media.
I am a Postdoctoral Associate at MIT CSAIL, working with Prof. William T. Freeman. I completed my Ph.D. in Computer Science at the University of Maryland, advised by Prof. Amitabh Varshney and working closely with Prof. Christopher A. Metzler and Prof. Jia-Bin Huang.
My current research focuses on the interplay between computational imaging, computer vision, and machine learning. I develop physics-informed information processing algorithms to reveal and understand phenomena beyond the current capabilities of human and machine vision. My north star is uncovering unseen knowledge crucial for scientific discovery and innovation.
Physics-based neural signal representations accelerate real-time 3D refocusing in Fourier ptychographic microscopy and overcome barriers to clinical diagnosis.
3D motion magnification allows us to magnify subtle motions in seemingly static scenes while supporting rendering from novel views.
The only true voyage of discovery would be not to visit strange lands, but to possess other eyes, to behold the universe through the eyes of another. — Marcel Proust
View interpolation without 3D reconstruction or correspondence.
Efficient implicit 3D shape representation.
Achieves remarkable compression rates by representing light fields as neural network weights. The simple, compact formulation also supports angular interpolation to generate novel viewpoints.
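The core idea can be sketched with a toy example (this is an illustrative sketch with made-up, untrained weights, not the project's actual implementation): a small MLP maps a 4D ray coordinate (u, v, s, t) to an RGB color, so the entire light field is stored in the network's weights, and a novel viewpoint is simply a query at an intermediate angular coordinate.

```python
import math
import random

random.seed(0)
H = 8  # hidden width, chosen arbitrarily for illustration

# Hypothetical random weights; a real neural light field would be
# trained by regressing captured rays against their observed colors.
W1 = [[random.gauss(0, 1) for _ in range(H)] for _ in range(4)]
W2 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(H)]

def light_field(u, v, s, t):
    """Query the neural light field at one (u, v, s, t) ray coordinate."""
    x = (u, v, s, t)
    # ReLU hidden layer
    h = [max(0.0, sum(x[i] * W1[i][j] for i in range(4))) for j in range(H)]
    # Sigmoid output keeps RGB in [0, 1]
    return [1.0 / (1.0 + math.exp(-sum(h[j] * W2[j][k] for j in range(H))))
            for k in range(3)]

# Angular interpolation: querying between trained view coordinates
# yields a color for a viewpoint that was never explicitly stored.
rgb = light_field(0.25, 0.75, 0.1, -0.3)
```

Because the representation is a continuous function rather than a grid of images, compression and view interpolation fall out of the same formulation.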
Template from Keunhong Park