Tagged in: Bert
SyncedReview
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
Synced in SyncedReview · Nov 24, 2023
ETH Zurich’s UltraFastBERT Realizes 78x Speedup for Language Models
Synced in SyncedReview · Jan 19, 2023
BERT-Style Pretraining on Convnets? Peking U, ByteDance & Oxford U’s Sparse Masked Modelling With Hierarchy Leads the…
Synced in SyncedReview · Mar 30, 2022
CMU & Google Extend Pretrained Models to Thousands of Underrepresented Languages Without Using Monolingual Data
Synced in SyncedReview · Mar 29, 2022
Google, NYU & Maryland U’s Token-Dropping Approach Reduces BERT Pretraining Time by 25%
Synced in SyncedReview · Nov 23, 2021
Microsoft’s DeBERTaV3 Uses ELECTRA-Style Pretraining With Gradient-Disentangled Embedding Sharing to Boost DeBERTa…
Synced in SyncedReview · Nov 18, 2021
Intel’s Prune Once for All Compression Method Achieves SOTA Compression-to-Accuracy Results on BERT
Synced in SyncedReview · Nov 17, 2021
Is BERT the Future of Image Pretraining? ByteDance Team’s BERT-like Pretrained Vision Transformer iBOT Achieves New…
Synced in SyncedReview · May 14, 2021
Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs
Synced in SyncedReview · May 4, 2021
Huawei & Tsinghua U Method Boosts Task-Agnostic BERT Distillation Efficiency by Reusing Teacher Model Parameters
Synced in SyncedReview · Apr 28, 2021
Google’s 1.3 MiB On-Device Model Brings High-Performance Disfluency Detection Down to Size