NeuWS: Neural Wavefront Shaping for Guidestar-Free Imaging Through Static and Dynamic Scattering Media
Neural signal representations enable breakthroughs in correcting for severe time-varying wavefront aberrations caused by scattering media.
I am a Postdoctoral Associate at MIT CSAIL, working with Prof. William T. Freeman. I completed my Ph.D. in Computer Science at the University of Maryland, advised by Prof. Amitabh Varshney and working closely with Prof. Christopher A. Metzler and Prof. Jia-Bin Huang.
My current research focuses on the interplay between computational imaging, computer vision, and machine learning. I develop physics-informed visual information processing algorithms to reveal, identify, and understand complex phenomena. I am dedicated to pushing the frontiers of visible reality for human and machine vision. My north star is uncovering unseen knowledge crucial for scientific discovery and medical innovation.
Physics-based neural signal representations accelerate real-time 3D refocusing in Fourier ptychographic microscopy and overcome barriers to clinical diagnosis.
3D motion magnification allows us to magnify subtle motions in seemingly static scenes while supporting rendering from novel views.
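The core idea behind motion magnification can be illustrated with a toy sketch (hypothetical, not the paper's actual pipeline): separate each tracked 3D point into a static component and a small time-varying displacement, then amplify only the displacement.

```python
import numpy as np

def magnify_trajectory(positions, alpha):
    """positions: (T, 3) array of a point's 3D position over time.
    alpha: magnification factor applied to the subtle motion."""
    positions = np.asarray(positions, dtype=float)
    static = positions.mean(axis=0)       # static scene component
    displacement = positions - static     # subtle motion signal
    return static + alpha * displacement  # magnified trajectory

# A point that oscillates by +/- 1 mm around (0, 0, 1)
traj = np.array([[0.0, 0.0, 1.0],
                 [0.001, 0.0, 1.0],
                 [-0.001, 0.0, 1.0]])
magnified = magnify_trajectory(traj, alpha=50.0)  # oscillation becomes +/- 5 cm
```

The magnified trajectories can then drive rendering from any viewpoint, which is what distinguishes the 3D setting from classical 2D video magnification.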
Embedding hidden signals in NeRF renderings from arbitrary viewpoints. This opens up the possibility of NeRF model and data ownership identification and protection as more NeRF models are exchanged and distributed online.
View interpolation without 3D reconstruction or correspondence.
Efficient implicit 3D shape representation.
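An implicit shape representation encodes geometry as a function whose zero level set is the surface. A minimal analytic example (a unit-sphere signed distance function standing in for the learned network, which this sketch does not reproduce):

```python
import numpy as np

def sphere_sdf(p):
    """Signed distance from point(s) p to a unit sphere at the origin:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(p, axis=-1) - 1.0

pts = np.array([[0.0, 0.0, 0.0],   # inside  -> negative
                [1.0, 0.0, 0.0],   # surface -> zero
                [2.0, 0.0, 0.0]])  # outside -> positive
d = sphere_sdf(pts)
```

Replacing the analytic function with a neural network yields a compact, resolution-free shape representation that can be queried at arbitrary points.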
Training a GAN only on blurry images from a single scene to recover a sharp image, without estimating blur kernels or acquiring a large labelled dataset.
Further improved compression of light fields with implicit neural representations.
Interesting findings from evaluating AlphaFold2 on protein-protein docking, highlighting areas for future development using deep learning methods.
Remarkable compression rates by representing light fields as neural network weights. Simple and compact formulation also supports angular interpolation to generate novel viewpoints.
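The "light field as network weights" idea can be sketched as follows: a small MLP maps a 4D ray coordinate to a color, so the weights themselves are the compressed light field, and querying intermediate angular coordinates gives novel viewpoints. This toy uses random weights for illustration only; it is not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer MLP; in the actual method these weights would be
# trained to reproduce the captured light field.
W1 = rng.normal(size=(4, 64)) * 0.5
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3)) * 0.5
b2 = np.zeros(3)

def light_field(u, v, x, y):
    """Map a 4D ray coordinate (angular u, v; spatial x, y) to RGB."""
    h = np.tanh(np.array([u, v, x, y]) @ W1 + b1)
    return h @ W2 + b2

# Angular interpolation: query a viewpoint between captured ones.
rgb = light_field(0.25, 0.5, 0.1, 0.2)
```

Because the representation is a continuous function rather than a grid of images, novel-view synthesis is just evaluation at new coordinates.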
Visualizing "who's looking at whom" from static profile images when people turn off their videos. Synthesized eye gazes are generated with a general-purpose neural network.
Training neural networks for 360° monocular depth and normal estimation. Proposed a novel approach that combines depth and normals as a double quaternion during loss computation.
Neural networks with multimodal input disproportionately rely on certain modalities while ignoring the rest. Developed a new strategy to overcome this bias and adapt to missing modalities.
Template from Keunhong Park