Akira’s ML News #Week49, 2020

Akihiro FUJII
Published in Analytics Vidhya · 6 min read · Dec 6, 2020

Here are some of the papers and articles I found particularly interesting in week 49 of 2020 (November 29~). I have tried to cover the most recent work as much as possible, but a paper’s submission date may not fall within this week.

Topics

  1. Machine Learning Papers
  2. Technical Articles
  3. Examples of Machine Learning use cases
  4. Other topics

— Weekly Editor’s pickup

— Past Articles

Week 48⇦ Week 49(this post) ⇨ Week 50

November 2020 summary
October 2020 summary

September 2020 summary

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1. Machine Learning Papers

— —

GAN technology keeps image quality high while reducing video-conferencing traffic

One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing
https://arxiv.org/abs/2011.15126

In video conferencing, sending only a keypoint representation of the face to the receiver saves roughly ten times the bandwidth compared to commercial standards. From that keypoint representation, the receiver side reconstructs a high-quality face image using a few-shot talking-head model.
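To get a feel for the bandwidth argument, here is a back-of-the-envelope sketch. All figures (keypoint count, precision, frame rate) are my own illustrative assumptions, not numbers from the paper:

```python
# Rough estimate of the bitrate needed to send only face keypoints,
# the core of the compression argument. Every constant here is an
# illustrative assumption.
n_keypoints = 20
floats_per_kp = 3            # e.g. x, y, plus one feature value per keypoint
bytes_per_float = 4
fps = 25

keypoint_bps = n_keypoints * floats_per_kp * bytes_per_float * fps * 8
print(keypoint_bps)  # 48000 bits/s, versus hundreds of kbit/s for compressed video
```

Even with generous assumptions, a keypoint stream sits in the tens of kilobits per second, which is why reconstructing the face on the receiver side can undercut a conventional video codec.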

A model that greatly exceeds MobileNetV3’s accuracy with even fewer computations

MicroNet: Towards Image Recognition with Extremely Low FLOPs
https://arxiv.org/abs/2011.12289

Proposes Micro-Factorized Convolution, which uses a block-diagonal matrix to group the connections between layers while swapping channels as in ShuffleNet, and Dynamic Shift-Max, which strengthens nonlinearity by taking the maximum value across groups. The results greatly exceed MobileNetV3’s accuracy at lower computational cost.
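The “maximum across groups” idea can be sketched in a few lines of numpy. This is a simplified toy, not the paper’s exact Dynamic Shift-Max (which also learns dynamic coefficients); the group count and shift scheme are my assumptions:

```python
import numpy as np

# Toy sketch of a max-across-groups nonlinearity: channels are split
# into groups, and the output takes the element-wise maximum between
# each group and a circularly shifted neighbor group.
def shift_max(x, groups=4):
    # x: feature vector of shape (channels,), channels divisible by groups
    c = x.shape[0]
    g = x.reshape(groups, c // groups)       # split channels into groups
    shifted = np.roll(g, shift=1, axis=0)    # circular shift across groups
    out = np.maximum(g, shifted)             # element-wise max fuses group info
    return out.reshape(c)

x = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 4.0, 2.0, -0.5])
y = shift_max(x, groups=4)
print(y.shape)  # (8,)
```

Because each output element is a max over two channels, the operation is nonlinear yet costs almost nothing in FLOPs, which is the point of the design.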

A survey of the challenges encountered, and the methods used to overcome them, when applying machine learning in the real world

Challenges in Deploying Machine Learning: a Survey of Case Studies
https://arxiv.org/abs/2011.09926

A survey of real-world machine learning challenges and solutions in terms of data, learning, evaluation, and implementation in applications. It is a collection of papers that delve into each of the challenges and describe how they have been overcome.

A combination of semi-supervised learning and contrastive learning

FROST: Faster and more Robust One-shot Semi-supervised Training
https://arxiv.org/abs/2011.09471

By incorporating a contrastive-learning loss into semi-supervised learning, FROST is proposed to make training faster; it reaches accuracy comparable to supervised learning in only around 128 epochs and is robust to hyperparameter selection.
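The general recipe (a supervised term on the few labels, a pseudo-label term on unlabeled data, plus a contrastive term between two augmented views) can be sketched as below. The exact loss forms and the weighting are illustrative placeholders, not the paper’s:

```python
import numpy as np

def cross_entropy(target, pred, eps=1e-9):
    return -np.sum(target * np.log(pred + eps))

def contrastive_term(z1, z2, temp=0.5):
    # pull two views of the same image together in embedding space
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return -np.dot(z1, z2) / temp

label = np.array([1.0, 0.0])
pred_labeled = np.array([0.9, 0.1])
pseudo = np.array([0.0, 1.0])            # pseudo-label from a weakly augmented view
pred_unlabeled = np.array([0.2, 0.8])    # prediction on the strongly augmented view
z1, z2 = np.array([1.0, 0.5]), np.array([0.9, 0.6])

loss = (cross_entropy(label, pred_labeled)        # supervised term
        + cross_entropy(pseudo, pred_unlabeled)   # semi-supervised term
        + 1.0 * contrastive_term(z1, z2))         # contrastive term
print(np.isfinite(loss))  # True
```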

Style transfer through a process like a human painting with a brush

Stylized Neural Painting
https://arxiv.org/abs/2011.08114

Instead of predicting a value for each pixel, this study performs style transfer through a process similar to a human painting with a brush: starting from a blank canvas, brushstrokes are applied repeatedly using a differentiable renderer, and the model is trained in a self-supervised manner based on similarity to the reference image.
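The stroke-by-stroke loop can be illustrated with a toy 1-D “canvas”. The real paper optimizes stroke parameters by gradient descent through a differentiable renderer; here a crude grid search over Gaussian “strokes” stands in, and all parameters are my assumptions:

```python
import numpy as np

# Toy stroke-based painting: start from a blank canvas and greedily add
# parameterized strokes that reduce the distance to a reference signal.
def render_stroke(pos, amp, width, n):
    x = np.arange(n)
    return amp * np.exp(-((x - pos) ** 2) / (2 * width ** 2))

def paint(reference, n_strokes=20):
    n = len(reference)
    canvas = np.zeros(n)                        # blank canvas
    for _ in range(n_strokes):
        best, best_err = None, np.sum((canvas - reference) ** 2)
        for pos in range(n):                    # crude search over stroke params
            for amp in (-0.5, 0.5):
                trial = canvas + render_stroke(pos, amp, 2.0, n)
                err = np.sum((trial - reference) ** 2)
                if err < best_err:
                    best, best_err = trial, err
        if best is None:                        # no stroke improves the canvas
            break
        canvas = best
    return canvas

ref = np.sin(np.linspace(0, np.pi, 32))
out = paint(ref)
print(np.mean((out - ref) ** 2) < np.mean(ref ** 2))  # error shrinks vs blank canvas
```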

Improving the convergence speed of Transformer-based object detection models by removing the decoder

Rethinking Transformer-based Set Prediction for Object Detection
https://arxiv.org/abs/2011.10881

The transformer-based object detection model DETR converges slowly (500 epochs). The authors identify the decoder as the cause of the slow convergence, eliminate it, and employ mechanisms such as those of Faster-RCNN. Convergence becomes much faster (around 36 epochs) and accuracy improves.

Solving time-dependent partial differential equations in a data-driven manner

DiffusionNet: Accelerating the Solution of Time-Dependent Partial Differential Equations Using Deep Learning
https://arxiv.org/abs/2011.10015

Proposes DiffusionNet, a data-driven solver for time-dependent partial differential equations, and confirms a speed advantage on tasks such as two-dimensional transient heat transfer when the spatial and temporal domains are large.
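For context, the classical baseline such a learned solver is compared against is an explicit finite-difference step of the 2-D heat equation, which must be iterated many times. A minimal sketch (grid size, diffusivity, and step sizes are illustrative):

```python
import numpy as np

# One explicit finite-difference step of the 2-D heat equation
# u_t = alpha * (u_xx + u_yy), with fixed (Dirichlet) boundaries.
def heat_step(u, alpha=0.1, dx=1.0, dt=1.0):
    lap = np.zeros_like(u)
    # 5-point Laplacian on the interior points
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] - 4 * u[1:-1, 1:-1]) / dx**2
    return u + alpha * dt * lap

u = np.zeros((16, 16))
u[8, 8] = 100.0               # hot spot in the middle
for _ in range(50):           # many small steps — the cost a learned solver amortizes
    u = heat_step(u)
print(u[8, 8] < 100.0)        # heat diffuses away from the peak
```

The stability constraint (here alpha·dt/dx² ≤ 0.25) forces small time steps, which is what makes long simulations expensive and a data-driven shortcut attractive.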

Combining ResNeXt grouping and Selective-Kernel structure

ResNeSt: Split-Attention Networks
https://arxiv.org/abs/2004.08955

Proposes a Split-Attention module that combines ResNeXt-like grouping with a Selective-Kernel structure. Accuracy improvements are confirmed for image classification, object detection, semantic segmentation, and instance segmentation.
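The core mechanism can be sketched as follows: features are split into branches, a global descriptor produces per-branch attention weights, and the output is the attention-weighted sum of the branches. This is a stripped-down toy; the dense layers of the real module are omitted, and the scoring scheme here is my simplification:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

# Toy Split-Attention: weight r feature splits by attention derived
# from their fused global descriptor, then sum them.
def split_attention(branches):
    # branches: shape (r, channels) — r splits of a feature map
    gap = branches.sum(axis=0)                 # fuse splits into global context
    logits = branches @ gap                    # per-branch score, shape (r,)
    attn = softmax(logits)                     # attention over the r splits
    return (attn[:, None] * branches).sum(axis=0)

branches = np.array([[1.0, 0.0, 2.0],
                     [0.5, 1.5, 0.0]])
out = split_attention(branches)
print(out.shape)  # (3,)
```

Because the attention weights are a softmax, the output is a convex combination of the branches — the module reweights split information rather than discarding it.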

Improving accuracy by correcting the resolution mismatch between training and inference

Fixing the Train-Test Resolution Discrepancy: FixEfficientNet
https://arxiv.org/abs/2003.08237

EfficientNet improves accuracy by increasing resolution, but there is a gap between the resolutions effectively used during training and inference. By fine-tuning the top layers at a given resolution after training, the authors close this gap and achieve better ImageNet results than NoisyStudent without using external data (SotA).
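Where the gap comes from: train-time random resized crops zoom in on objects, while a test-time center crop does not, so objects appear at different scales. The arithmetic below illustrates the idea with assumed numbers (mean crop ratio, resize size); it follows the spirit of the FixRes analysis, not its exact derivation:

```python
import math

# Illustrative apparent-size mismatch between train and test pipelines.
k_train = 224                        # training resolution
avg_crop_area_ratio = 0.6            # assumed mean area of random resized crops
train_zoom = k_train / (math.sqrt(avg_crop_area_ratio) * 256)
test_zoom = 224 / 256                # center crop of a 256-px resize

# test resolution at which apparent object size matches training
k_test = 224 * train_zoom / test_zoom
print(round(k_test))  # 289 — test at a higher resolution than you trained at
```

This is why testing (and lightly fine-tuning) at a higher resolution than the training one recovers accuracy.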

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

2. Technical Articles

— — — —

Best 10 Machine Learning Papers of 2020

This article presents 10 machine learning papers published in 2020, including GPT-3, EfficientDet, ViT, and others. It not only gives an overview of each paper, but also summarizes the core technology and the community’s reaction.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

3. Examples of Machine Learning use cases

— — — —

Attend a video conference in your pajamas

Embody has released “xpression”, a virtual camera app that uses AI to transform the person on camera. You can even have Einstein speak for you. It is handy when you get tired of feeling watched all the time in video conferences.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

4. Other Topics

— — — —

DeepMind solves a 50-year-old unsolved protein-folding problem

DeepMind announced AlphaFold2, a vastly improved version of AlphaFold, claiming that it solves a 50-year-old unsolved protein-folding problem. The paper and detailed methodology have not yet been published.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

— Past Articles

Week 48⇦ Week 49(this post) ⇨ Week 50

November 2020 summary
October 2020 summary

September 2020 summary

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

On Twitter, I post one-sentence paper commentaries.

https://twitter.com/AkiraTOSEI
