State of research around Autoencoders in 2023, part 7 (Machine Learning)

Monodeep Mukherjee
Mar 14, 2023
1. FlowFormer++: Masked Cost Volume Autoencoding for Pretraining Optical Flow Estimation (arXiv)

Authors: Xiaoyu Shi, Zhaoyang Huang, Dasong Li, Manyuan Zhang, Ka Chun Cheung, Simon See, Hongwei Qin, Jifeng Dai, Hongsheng Li

Abstract: FlowFormer introduces a transformer architecture into optical flow estimation and achieves state-of-the-art performance. The core component of FlowFormer is its transformer-based cost-volume encoder. Inspired by the recent success of masked autoencoding (MAE) pretraining in unleashing transformers’ capacity for encoding visual representations, we propose Masked Cost Volume Autoencoding (MCVA) to enhance FlowFormer by pretraining the cost-volume encoder with a novel MAE scheme. First, we introduce a block-sharing masking strategy to prevent masked-information leakage, as the cost maps of neighboring source pixels are highly correlated. Second, we propose a novel pretext reconstruction task, which encourages the cost-volume encoder to aggregate long-range information and ensures pretraining-finetuning consistency. We also show how to modify the FlowFormer architecture to accommodate masks during pretraining. Pretrained with MCVA, FlowFormer++ ranks 1st among published methods on both the Sintel and KITTI-2015 benchmarks. Specifically, FlowFormer++ achieves 1.07 and 1.94 average end-point error (AEPE) on the clean and final passes of the Sintel benchmark, a 7.76% and 7.18% error reduction over FlowFormer. FlowFormer++ obtains 4.52 F1-all on the KITTI-2015 test set, improving on FlowFormer by 0.16.
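The block-sharing masking strategy can be illustrated with a short sketch. This is not the authors’ code; the cost-volume shape, block size, patch size, and mask ratio are illustrative assumptions. The key idea it demonstrates: every source pixel inside a block shares one random mask over its (h, w) cost map, so a masked region cannot be trivially recovered from the highly correlated cost map of an unmasked neighbour.

```python
# Illustrative sketch of block-sharing masking (not the FlowFormer++ code).
# A cost volume stores, for each source pixel on an (H, W) grid, a cost
# map of shape (h, w). All source pixels inside a block x block tile
# share one patch-level mask over their cost maps.
import numpy as np

def block_sharing_mask(H, W, h, w, block=8, patch=4, mask_ratio=0.5, seed=None):
    """Boolean mask of shape (H, W, h, w); True marks masked cost entries.

    Assumes h and w are divisible by `patch`.
    """
    rng = np.random.default_rng(seed)
    ph, pw = h // patch, w // patch  # patches per cost map
    mask = np.zeros((H, W, h, w), dtype=bool)
    for by in range(0, H, block):
        for bx in range(0, W, block):
            # One random patch mask, shared by every source pixel in this
            # tile, so correlated neighbours cannot leak masked content.
            masked = rng.random((ph, pw)) < mask_ratio
            tile_mask = np.kron(masked, np.ones((patch, patch), dtype=bool))
            mask[by:by + block, bx:bx + block] = tile_mask
    return mask

# Example: a 32x32 source-pixel grid, each pixel with a 16x16 cost map.
m = block_sharing_mask(32, 32, 16, 16, seed=0)
print(m.shape, round(m.mean(), 2))  # (32, 32, 16, 16), ~0.5 of entries masked
```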

2. Noise reduction on single-shot images using an autoencoder (arXiv)

Authors: Oliver J. Bartlett, David M. Benoit, Kevin A. Pimbblet, Brooke Simmons, Laura Hunt

Abstract: We present an application of autoencoders to the problem of noise reduction in single-shot astronomical images and explore its suitability for upcoming large-scale surveys. An autoencoder is a machine learning model that summarises an input to identify its key features, then from this knowledge predicts a representation of a different input. The broad aim of our autoencoder model is to retain morphological information (e.g., non-parametric morphological information) from the survey data whilst simultaneously reducing the noise contained in the image. We implement an autoencoder with convolutional and max-pooling layers. We test our implementation on images from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) that contain varying levels of noise, and report how successful our autoencoder is by considering the Mean Squared Error (MSE), the Structural Similarity Index (SSIM), the second-order moment of the brightest 20 per cent of the galaxy’s flux (M20), and the Gini coefficient, noting how the results vary between the original, stacked, and noise-reduced images. We show that we are able to reduce noise across many different observational targets whilst retaining the galaxy’s morphology, with the metrics evaluated on a target-by-target basis. We establish that this process achieves a positive result in a matter of minutes using only one single-shot image, compared with the multiple survey images required by other noise-reduction techniques.
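A convolutional autoencoder with max-pooling of the kind described is straightforward to sketch. The input size, layer widths, and training setup below are assumptions for illustration, not the authors’ published architecture; the sketch shows the compress-then-reconstruct pattern used to strip noise while keeping morphology.

```python
# Illustrative denoising autoencoder with convolutional and max-pooling
# layers (128x128 single-channel cutouts and layer widths are assumptions).
from tensorflow.keras import layers, models

def build_denoiser(size=128):
    inp = layers.Input(shape=(size, size, 1))
    # Encoder: convolutions + max pooling compress the image to key features.
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: upsampling restores resolution for the denoised prediction.
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    # Sigmoid output assumes pixel values scaled to [0, 1].
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # MSE is one reported metric
    return model

# Training would pair noisy single-shot cutouts with cleaner targets
# (e.g. stacked images), then evaluate MSE, SSIM, M20, and Gini per target:
# model = build_denoiser()
# model.fit(noisy_cutouts, clean_targets, epochs=50, batch_size=32)
```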


Monodeep Mukherjee

Universe Enthusiast. Writes about Computer Science, AI, Physics, Neuroscience, Technology, and Front-End and Back-End Development.