Brandon Y. Feng

I am a Postdoctoral Associate at MIT CSAIL, working with Prof. William T. Freeman. I completed my Ph.D. in Computer Science at the University of Maryland, advised by Prof. Amitabh Varshney and working closely with Prof. Christopher A. Metzler and Prof. Jia-Bin Huang.
My current research focuses on the interplay between computational imaging, computer vision, and machine learning. I develop physics-informed visual information processing algorithms to reveal, identify, and understand complex phenomena. I am dedicated to pushing the frontiers of visible reality for human and machine vision. My north star is uncovering unseen knowledge crucial for scientific discovery and medical innovation.

Publications

NeuWS: Neural Wavefront Shaping for Guidestar-Free Imaging Through Static and Dynamic Scattering Media

Neural signal representations enable breakthroughs in correcting for severe time-varying wavefront aberrations caused by scattering media.
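At its core, the approach jointly estimates a static scene and a rapidly changing aberration so that simulated measurements explain the captured frames. Below is a heavily simplified sketch of that joint-optimization idea; the pixel grids stand in for the neural representations used in the paper, the forward model is reduced to a pupil-phase PSF and a circular convolution, and the wavefront-shaping hardware loop is omitted.

```python
import torch

# Placeholder data: N frames of a static scene seen through a time-varying medium.
N, H, W = 16, 64, 64
measurements = torch.rand(N, H, W)

scene = torch.rand(H, W, requires_grad=True)        # static scene estimate
phases = torch.zeros(N, H, W, requires_grad=True)   # per-frame aberration phase
opt = torch.optim.Adam([scene, phases], lr=1e-2)

def simulate(scene, phase):
    # Simplified physics: the aberration phase defines a point-spread function,
    # and the measurement is the scene blurred by that PSF.
    pupil = torch.exp(1j * phase)
    psf = torch.abs(torch.fft.ifft2(pupil)) ** 2
    psf = psf / psf.sum()
    return torch.real(torch.fft.ifft2(torch.fft.fft2(scene) * torch.fft.fft2(psf)))

for step in range(200):
    opt.zero_grad()
    loss = sum(((simulate(scene, phases[t]) - measurements[t]) ** 2).mean()
               for t in range(N))
    loss.backward()
    opt.step()
```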

FPM-INR: Fourier Ptychographic Microscopy Image Stack Reconstruction Using Implicit Neural Representations

Optica, 2023 (11.8% acceptance rate)

Physics-based neural signal representations enable real-time 3D refocusing in Fourier ptychographic microscopy, overcoming a speed barrier to clinical diagnosis.
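The reconstruction side of this is easy to picture as an implicit neural representation queried at a continuous focal depth. The sketch below shows only that querying idea with hypothetical names; it omits the Fourier ptychography forward model and the physics-based training the paper relies on.

```python
import torch
import torch.nn as nn

class ImageStackINR(nn.Module):
    """Toy coordinate MLP: (x, y, z) -> intensity of the refocused slice at depth z."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):              # coords: (..., 3) in [-1, 1]
        return self.net(coords)

model = ImageStackINR()

# Query a full slice at an arbitrary, continuous focal depth z.
H = W = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
z = 0.3 * torch.ones_like(xs)
coords = torch.stack([xs, ys, z], dim=-1)
slice_at_z = model(coords).squeeze(-1)      # (H, W) refocused slice
```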

3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields

3D motion magnification amplifies subtle motions in seemingly static scenes while supporting rendering from novel views.
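A rough sketch of the amplification step, with hypothetical names: a time-varying field predicts how each 3D point moves over time, and scaling that displacement before rendering magnifies the motion. The renderer itself is omitted, and the paper works with the neural field's representations rather than this toy displacement field.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Toy time-varying field: (point x, time t) -> 3D displacement of x at t."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def magnified_points(field, x, t, alpha=20.0):
    # Scale the subtle displacement by alpha before passing points to the renderer.
    return x + alpha * field(x, t)

field = DeformationField()
x = torch.rand(1024, 3)                  # sample points in the scene
t = torch.full((1024, 1), 0.5)           # query time
x_mag = magnified_points(field, x, t)    # points with magnified motion
```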

StegaNeRF: Embedding Invisible Information within Neural Radiance Fields

Embedding hidden signals into NeRF renderings from arbitrary viewpoints, opening up the possibility of identifying and protecting NeRF model and data ownership as more NeRF models are exchanged and distributed online.
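The objective can be pictured as a two-term loss: keep the renderings visually unchanged while a learned decoder recovers the embedded signal from them. The sketch below captures only that intuition with placeholder networks and images, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def steganographic_loss(rendered, original, decoder, secret, lam=0.1):
    fidelity = ((rendered - original) ** 2).mean()          # renderings stay unchanged
    recovery = ((decoder(rendered) - secret) ** 2).mean()   # decoder recovers hidden info
    return fidelity + lam * recovery

decoder = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))      # placeholder decoder
rendered = torch.rand(1, 3, 64, 64, requires_grad=True)     # stand-in for a NeRF rendering
original = torch.rand(1, 3, 64, 64)                         # rendering before fine-tuning
secret = torch.rand(1, 3, 64, 64)                           # hidden image to embed
loss = steganographic_loss(rendered, original, decoder, secret)
loss.backward()                                              # gradients flow back to the NeRF
```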

VIINTER: View Interpolation with Implicit Neural Representations of Images

View interpolation without 3D reconstruction or correspondence.
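As I would summarize the mechanism (hypothetical names, simplified conditioning): each captured view is tied to a latent code that conditions a coordinate MLP of the image, and interpolating the latent codes of two views yields intermediate views with no geometry or matching involved.

```python
import torch
import torch.nn as nn

class ConditionedImageINR(nn.Module):
    """Toy conditioned image INR: (pixel coordinate, per-view latent code) -> RGB."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xy, z):
        return self.net(torch.cat([xy, z.expand(xy.shape[0], -1)], dim=-1))

model = ConditionedImageINR()
z_a, z_b = torch.randn(1, 64), torch.randn(1, 64)   # latent codes of two captured views

# Interpolate the latent codes (not the pixels) to synthesize an in-between view.
alpha = 0.5
z_mid = (1 - alpha) * z_a + alpha * z_b
xy = torch.rand(4096, 2) * 2 - 1                    # query pixel coordinates in [-1, 1]
rgb_mid = model(xy, z_mid)                          # (4096, 3) pixels of the new view
```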

PRIF: Primary Ray-based Implicit Function

Efficient implicit 3D shape representation.

TurbuGAN: An Adversarial Learning Approach to Spatially-Varying Multiframe Blind Deconvolution with Applications to Imaging Through Turbulence

Training a GAN only on blurry images of a single scene to recover a sharp image, without estimating blur kernels or acquiring a large labeled dataset.
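A toy version of the adversarial loop, heavily simplified: a generator proposes a sharp image, a stand-in turbulence simulator degrades it, and a discriminator compares the simulated degradations against the real captures. All modules below are placeholders for illustration; in particular, the real method relies on a physics-grounded, spatially-varying turbulence forward model.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))      # sharp-image estimate
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))  # real vs. simulated blur

def simulate_turbulence(img):
    # Stand-in for a differentiable, spatially-varying turbulence simulator.
    return img + 0.1 * torch.randn_like(img)

real_blurry = torch.rand(8, 3, 64, 64)   # captured frames of one scene
latent = torch.rand(8, 3, 64, 64)        # input to the generator
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    fake_blurry = simulate_turbulence(generator(latent))
    # Discriminator: distinguish real captures from simulated degradations.
    d_loss = bce(discriminator(real_blurry), torch.ones(8, 1, 64, 64)) + \
             bce(discriminator(fake_blurry.detach()), torch.zeros(8, 1, 64, 64))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: its sharp estimate must explain the captures well enough to fool D.
    g_loss = bce(discriminator(fake_blurry), torch.ones(8, 1, 64, 64))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```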

Neural Subspaces for Light Fields

Brandon Y. Feng, Amitabh Varshney

Further improving the compression of light fields with implicit neural representations.

Benchmarking AlphaFold for Protein Complex Modeling Reveals Accuracy Determinants

Interesting findings from evaluating AlphaFold2 on protein-protein docking, highlighting areas for future development of deep learning methods.

SIGNET: Efficient Neural Representations for Light Fields

Brandon Y. Feng, Amitabh Varshney

Achieving remarkable compression rates by representing light fields as neural network weights. The simple and compact formulation also supports angular interpolation to generate novel viewpoints.
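The compressed representation is simply the weights of a small coordinate network over the 4D light field, which can then be queried at angular positions that were never captured. The sketch below shows that querying idea with hypothetical names; the paper additionally uses a specialized input encoding that is omitted here.

```python
import torch
import torch.nn as nn

class LightFieldINR(nn.Module):
    """Toy 4D light-field MLP: (u, v, x, y) -> RGB; the trained weights are the compressed light field."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, coords):              # coords: (..., 4) in [-1, 1]
        return self.net(coords)

model = LightFieldINR()

# Angular interpolation: query a (u, v) position between the captured viewpoints.
H = W = 32
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
uv = torch.full_like(xs, 0.25)              # a novel angular coordinate
coords = torch.stack([uv, uv, xs, ys], dim=-1)
novel_view = model(coords)                  # (H, W, 3) synthesized view
```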

GazeChat: Enhancing Virtual Conferences with Gaze-aware 3D Photos

Zhenyi He, Keru Wang, Brandon Y. Feng, Ruofei Du, Ken Perlin

Visualizing "who is looking at whom" from static profile images, since people often turn off their video. Eye gaze is synthesized by leveraging a general-purpose neural network.

Deep Depth Estimation on 360-Degree Images with a Double Quaternion Loss

Training neural networks for 360° monocular depth and normal estimation. Proposes a novel approach that combines depth and normals as a double quaternion during loss computation.

A Self-Adaptive Network for Multiple Sclerosis Lesion Segmentation from Multi-Contrast MRI with Various Imaging Sequences

Brandon Y. Feng, Huitong Pan, Craig H. Meyer, Xue Feng

Neural networks with multimodal inputs disproportionately rely on certain modalities while ignoring the rest. Developed a new strategy to overcome this bias and adapt to missing modalities.
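One generic way to picture this kind of adaptation is modality dropout during training, so the network cannot over-rely on any single contrast and still works when sequences are missing at test time. The sketch below illustrates only that general idea with placeholder names; it is not necessarily the paper's exact self-adaptive strategy.

```python
import torch

def modality_dropout(volumes, drop_prob=0.3):
    """volumes: (batch, num_modalities, D, H, W) multi-contrast MRI stack.
    Randomly zero out entire modalities so the segmentation network learns
    to cope when some imaging sequences are unavailable."""
    b, m = volumes.shape[:2]
    keep = (torch.rand(b, m, device=volumes.device) > drop_prob).float()
    keep[keep.sum(dim=1) == 0, 0] = 1.0           # always keep at least one modality
    return volumes * keep.view(b, m, 1, 1, 1)

# Usage inside a hypothetical training loop:
batch = torch.rand(2, 4, 16, 64, 64)              # e.g., T1, T2, FLAIR, PD volumes
augmented = modality_dropout(batch)
# loss = criterion(segmentation_model(augmented), lesion_labels)
```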


Template from Keunhong Park