Akira’s ML News #Week51, 2020

Akihiro FUJII · Published in Analytics Vidhya · Dec 20, 2020 · 6 min read

Here are some of the papers and articles that I found particularly interesting from my reading in week 51 of 2020 (December 13~). I have tried to cover the most recent work as much as possible, but a paper's submission date may not fall within this week.

Topics

  1. Machine Learning Papers
  2. Technical Articles
  3. Examples of Machine Learning use cases
  4. Other topics

— Weekly Editor’s pickup

— — — — — — — — — — — — — — — — — — — — — — — — — —

1. Machine Learning Papers

— —

Quality of generated data and quality of training data

A Note on Data Biases in Generative Models
https://arxiv.org/abs/2012.02516

A study (more of a note) on how the dataset affects generative models. When the same latent representation is decoded by autoencoder decoders trained on different datasets, the quality of the decoded image differs from dataset to dataset. This indicates that the quality of the generated image depends not only on the model but also on the quality of the dataset.

Avoiding repetitive generation of the same word by considering word occurrence probabilities

F2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax
https://arxiv.org/abs/2009.09417

Text generation suffers from the problem of the same word being generated repeatedly. Attributing this problem to the imbalance of token frequencies in text data, they propose F²-softmax, which groups tokens by frequency and generates a word only after first predicting its frequency class. This method can generate text with less bias than conventional methods.
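
As a rough illustration of the idea, a frequency-factorized output head can be sketched in PyTorch as below. The grouping of the vocabulary into frequency classes (class_of_token) and the simple masking scheme are assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyFactorizedSoftmax(nn.Module):
    # Sketch of a two-stage output head: first predict the frequency class
    # of the next token, then predict the token within that class.
    def __init__(self, hidden_dim, vocab_size, class_of_token, num_classes):
        super().__init__()
        # class_of_token: (vocab_size,) tensor mapping token id -> frequency class
        self.register_buffer("class_of_token", class_of_token)
        self.class_head = nn.Linear(hidden_dim, num_classes)
        self.token_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, h, target_tokens):
        # stage 1: predict the frequency class of the target token
        class_logits = self.class_head(h)                      # (B, num_classes)
        target_class = self.class_of_token[target_tokens]      # (B,)
        class_loss = F.cross_entropy(class_logits, target_class)

        # stage 2: predict the token, restricted to its frequency class
        token_logits = self.token_head(h)                      # (B, vocab_size)
        same_class = self.class_of_token.unsqueeze(0) == target_class.unsqueeze(1)
        token_logits = token_logits.masked_fill(~same_class, float("-inf"))
        token_loss = F.cross_entropy(token_logits, target_tokens)

        return class_loss + token_loss

Because each token only competes against tokens of similar frequency, rare words are less likely to be drowned out by a handful of very frequent ones.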

Adversarial attacks for medical images

STABILIZED MEDICAL IMAGE ATTACKS
https://openreview.net/forum?id=QfTXQiGYudJ

A proposed method for adversarial attacks on medical images. In addition to the usual loss for generating adversarial noise, they add a term that minimizes the difference between the network outputs for adversarial samples and for their Gaussian-smoothed versions. Medical imaging spans many image domains, but many of them can be attacked with this method.
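
The stabilization term can be sketched as follows; this is a hedged reading of the summary above, and the blur parameters, the loss weight lam, and the use of MSE are illustrative assumptions rather than values from the paper.

import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

blur = GaussianBlur(kernel_size=5, sigma=1.0)   # assumed smoothing parameters

def stabilized_attack_loss(model, x_adv, y_true, lam=1.0):
    out_adv = model(x_adv)            # network output for the adversarial image
    out_smooth = model(blur(x_adv))   # output for its Gaussian-smoothed version

    # usual adversarial objective: push the prediction away from the true label
    adv_loss = -F.cross_entropy(out_adv, y_true)
    # stabilization: keep the outputs consistent under Gaussian smoothing
    stab_loss = F.mse_loss(out_adv, out_smooth)

    return adv_loss + lam * stab_loss

The attacker would then update x_adv by gradient steps on this loss, obtaining a perturbation that both fools the model and stays stable under smoothing.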

End-to-End Panoptic Segmentation

MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers
https://arxiv.org/abs/2012.00759

They propose MaX-DeepLab, which can learn panoptic segmentation in a fully end-to-end fashion by integrating two paths, a global memory path and an image path, with a Transformer. It significantly improves the score on COCO and achieves state-of-the-art performance.

※Panoptic Segmentation: a task that combines instance segmentation and semantic segmentation.

Use global memory to reduce the computational complexity of the Transformer.

GMAT: Global Memory Augmentation for Transformers
https://arxiv.org/abs/2006.03274

They propose GMAT, a Transformer augmented with a global memory. With a memory of length M and a sequence of length L, the computational complexity can be reduced from L² to M*(L+M). Long sequences can be compressed into the memory representation, and accuracy improves on tasks that require global information.
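
A simplified sketch of the memory mechanism (my own reading of the summary above, not the paper's exact architecture): M learned memory tokens attend to the L sequence tokens, and the sequence then attends back to the memory, so the attention cost scales roughly with M*(L+M) instead of L².

import torch
import torch.nn as nn

class GlobalMemoryBlock(nn.Module):
    def __init__(self, dim, num_heads=8, num_mem=16):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_mem, dim))  # learned global memory
        self.mem_reads_seq = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.seq_reads_mem = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                       # x: (B, L, dim)
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)   # (B, M, dim)

        # memory summarizes the sequence: queries = memory, keys/values = sequence (M x L)
        mem, _ = self.mem_reads_seq(mem, x, x)

        # sequence reads the global summary back: queries = sequence, keys/values = memory (L x M)
        out, _ = self.seq_reads_mem(x, mem, mem)
        return x + out                                          # residual connection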

Handling global attention over an image by applying attention along each of its axes.

Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation
https://arxiv.org/abs/2003.07853

They propose Axial-Attention, which achieves global attention over an image by applying self-attention along its height and width axes separately, reducing the amount of computation (size⁴ → 2*size³ in the case of a square). They confirm its effectiveness on image recognition and segmentation tasks.
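
A minimal sketch of the idea (simplified; the paper also uses position-sensitive attention, which is omitted here): self-attention is applied along the width axis for each row, and then along the height axis for each column.

import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                          # x: (B, C, H, W)
        B, C, H, W = x.shape

        # attend along the width axis: each of the B*H rows is a sequence of length W
        rows = x.permute(0, 2, 3, 1).reshape(B * H, W, C)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(B, H, W, C)

        # attend along the height axis: each of the B*W columns is a sequence of length H
        cols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(B, W, H, C).permute(0, 3, 2, 1)   # back to (B, C, H, W)

For a square S x S feature map, each pass costs on the order of S³, instead of the S⁴ required by full 2D self-attention.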

Optimizing YOLOv3, including its network architecture.

YOLOv4: Optimal Speed and Accuracy of Object Detection
https://arxiv.org/abs/2004.10934

This work substantially modifies YOLOv3 with recent techniques such as Cross-Stage-Partial connections, Spatial Pyramid Pooling, and Path Aggregation Networks to make it faster and more accurate than EfficientDet. It can be seen as hyper-tuning of YOLOv3, selecting the best existing techniques for both the architecture and the training methods.

An object detection model that is more accurate and faster than EfficientDet-D7.

Scaled-YOLOv4: Scaling Cross Stage Partial Network
https://arxiv.org/abs/2011.08036

A study that explores the trade-off between resolution, width, and depth based on CSPDarknet53, the backbone network of YOLOv4, achieving accuracy and speed beyond EfficientDet-D7.

— — — — — — — — — — — — — — — — — — — — — — — — — —

2. Technical Articles

— —

Important NeurIPS2020 Papers for Engineers

This is a list of important NeurIPS2020 papers compiled by the author of this blog, with a focus on transformers.

— — — — — — — — — — — — — — — — — — — — — — — — — —

3. Examples of Machine Learning use cases

— —

AI for environmental protection

An article on examples of AI being used to protect the environment. It describes how AI has been applied to recycling, species conservation, reducing sewage pollution, minimizing food waste, mitigating air pollution, and protecting forests.

Protecting bees with AI.

ApisProtect has announced that it is entering the US market for hive inspection equipment for beekeepers. Beehives have traditionally been inspected manually, which stresses the bees. ApisProtect can be used to protect bees from disease and other threats without disturbing the hive. Since bees are involved in the pollination of many plants, protecting them helps prevent food prices from skyrocketing.

— — — — — — — — — — — — — — — — — — — — — — — — — —

4. Other Topics

— —

What are the soft skills required in data science?

A thread discussing what kind of soft skills (communication and other abilities besides hard skills such as coding or knowing a particular programming language) are needed in data science. Among others, the ability to ask customers the right questions is mentioned.

AI Companies by Sector

Articles summarizing leading AI companies by sector, such as productivity improvement, human resources, manufacturing, and health care.

— — — — — — — — — — — — — — — — — — — — — — — — — —

— Past Articles

Week 50 ⇦ Week 51 (this post) ⇨ Week 52 (unwritten)

November 2020 summary
October 2020 summary

September 2020 summary

— — — — — — — — — — — — — — — — — — — — — — — — — —

On Twitter, I post one-sentence paper commentaries.

https://twitter.com/AkiraTOSEI
