Link to paper: https://arxiv.org/pdf/1911.08718.pdf
Link to paper: https://arxiv.org/pdf/1710.09829.pdf
Disclaimer: This is a student’s recap, written in the course of my studies. There may be mistakes and misinformation. Feel free…
Link to pdf: https://arxiv.org/pdf/1905.05055.pdf
Adopting the idea of the attention mechanism from sequence modeling (e.g., text, music), self-attention/Transformer is now changing the game in Computer Vision on many fronts: image classification, object detection, and segmentation.
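As a rough illustration of the mechanism being adopted, here is a minimal sketch of scaled dot-product self-attention in NumPy. It omits the learned query/key/value projections and multi-head structure of a real Transformer; the function name and shapes are my own for illustration, not from any paper above.

```python
import numpy as np

def self_attention(X):
    # Minimal scaled dot-product self-attention (no learned projections):
    # each token's output is a similarity-weighted mix of all tokens.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ X                               # weighted sum of values

X = np.random.randn(4, 8)    # 4 tokens, 8-dim features
out = self_attention(X)
print(out.shape)             # (4, 8)
```

In vision models, the "tokens" are typically image patches or feature-map positions, so every location can attend to every other location, unlike a convolution's fixed local window.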
SPEECH-TO-IMAGE GENERATION via ADVERSARIAL LEARNING
These were the top 10 stories published by XuLab in 2020.