NeuWS: Neural Wavefront Shaping for Guidestar-Free Imaging Through Static and Dynamic Scattering Media
Neural signal representations enable breakthroughs in correcting for severe time-varying wavefront aberrations caused by scattering media.
I completed my Ph.D. in Computer Science at the University of Maryland, advised by Prof. Amitabh Varshney and working closely with Prof. Chris Metzler and Prof. Jia-Bin Huang. My research interests center on computational imaging, mid-level vision, and computational photography. I have developed machine learning algorithms for image and 3D data processing, with applications ranging from mixed reality to the natural sciences. My goal is to extend the boundaries of visible reality for humans by designing physics-inspired machine learning algorithms that augment and unlock human abilities to perceive and create information.
View interpolation without 3D reconstruction or correspondence.
Efficient implicit 3D shape representation.
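As a toy illustration of the general idea (my own minimal sketch, not this project's actual architecture), an implicit shape representation can be as simple as a small coordinate MLP that maps a 3D point to a signed distance value, here fit to an analytic unit sphere:

```python
# Minimal sketch of an implicit 3D shape representation (toy example,
# not the project's architecture): an MLP mapping xyz to signed distance.
import torch
import torch.nn as nn

class ImplicitSDF(nn.Module):
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, xyz):          # xyz: (N, 3)
        return self.net(xyz)         # (N, 1) signed distance

# Toy target shape: the unit sphere, whose exact SDF is ||p|| - 1.
def sphere_sdf(xyz):
    return xyz.norm(dim=-1, keepdim=True) - 1.0

model = ImplicitSDF()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(1000):
    pts = torch.rand(4096, 3) * 2 - 1              # sample points in [-1, 1]^3
    loss = (model(pts) - sphere_sdf(pts)).abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The shape is then stored entirely in the network weights; its surface is the zero level set of the learned function.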
Training a GAN only on blurry images from a single scene to recover a sharp image, without estimating blur kernels or acquiring a large labeled dataset.
Further improved compression of light fields with implicit neural representations.
Embedding hidden signals in NeRF renderings from arbitrary viewpoints. Opens up the possibility of identifying and protecting NeRF model and data ownership as more NeRF models are exchanged and distributed online.
Interesting findings from evaluating AlphaFold2 on protein-protein docking, highlighting areas for future development using deep learning methods.
Remarkable compression rates by representing light fields as neural network weights. Simple and compact formulation also supports angular interpolation to generate novel viewpoints.
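To make the idea concrete, here is a hypothetical minimal sketch (not the paper's exact network, training recipe, or data): the light field is stored as the weights of a small MLP mapping a 4D ray coordinate (u, v, s, t) to an RGB color, and querying fractional (u, v) coordinates yields angular interpolation for novel viewpoints.

```python
# Sketch: a light field stored as MLP weights. Hypothetical minimal setup,
# not the paper's exact architecture or training procedure.
import torch
import torch.nn as nn

class LightFieldMLP(nn.Module):
    """Maps a 4D light-field coordinate (u, v, s, t) to an RGB color."""
    def __init__(self, hidden=128, layers=4):
        super().__init__()
        dims = [4] + [hidden] * layers + [3]
        net = []
        for i in range(len(dims) - 1):
            net.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                net.append(nn.ReLU())
        self.net = nn.Sequential(*net)

    def forward(self, coords):                   # coords: (N, 4) in [0, 1]
        return torch.sigmoid(self.net(coords))   # (N, 3) RGB

# Fit the network to a (toy) light field tensor of shape (U, V, S, T, 3).
light_field = torch.rand(8, 8, 64, 64, 3)        # placeholder data
U, V, S, T, _ = light_field.shape
grid = torch.stack(torch.meshgrid(
    torch.linspace(0, 1, U), torch.linspace(0, 1, V),
    torch.linspace(0, 1, S), torch.linspace(0, 1, T), indexing="ij"), -1)
coords, colors = grid.reshape(-1, 4), light_field.reshape(-1, 3)

model = LightFieldMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    idx = torch.randint(0, coords.shape[0], (8192,))
    loss = ((model(coords[idx]) - colors[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Angular interpolation: query a viewpoint between the captured (u, v) samples.
novel_view = model(torch.tensor([[0.37, 0.52, 0.5, 0.5]]))
```

The compression rate then follows from comparing the number of network parameters to the size of the raw light field tensor.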
Visualizing "who's looking at who" from static profile images as people always turn off their videos. Synthesized eye gazes are obtained by leveraging a general-purpose neural network.
Training neural networks for 360° monocular depth and normal estimation. Proposed a novel approach that combines depth and normals as a double quaternion during loss computation.
Neural networks with multimodal inputs often rely disproportionately on certain modalities while ignoring the rest. Developed a new strategy to overcome this bias and adapt to missing modalities.