Cory Doctorow, "Backdooring a summarizerbot to shape opinion": Model spinning maintains accuracy metrics, but changes the point of view. (Oct 21, 2022)
Yoonwoo Jeong in Towards AI, "Are You Sure That You Can Implement Image Classification Networks?": Paper review of "Bag of Tricks for Image Classification with Convolutional Neural Networks". (Mar 25, 2022)
Leonardo Tanzi, "Four Deep Learning Papers from Late 2021 That Will Have a Significant Impact on 2022": Presented with short summaries. (Jan 26, 2022)
Synced in SyncedReview, "Warsaw U, Google & OpenAI’s Terraformer Achieves a 37x Speedup Over Dense Baselines on 17B…": While large-scale transformer architectures have significantly advanced the state of the art on most natural language processing (NLP)… (Dec 3, 2021)
Mars Xiang in The Startup, "Convolutions: Transposed and Deconvolution": Convolutional neural networks are commonly used in computer vision problems. But what about transposed convolution and deconvolution? (Jul 17, 2020)
Fathy Rashad in Towards Data Science, "How I Built an AI Text-to-Art Generator": A detailed, step-by-step write-up on how I built Text2Art.com. (Oct 2, 2021)
Kabir Ahuja in The Startup, "How to Use PyTorch Dataloaders to Work with Enormously Large Text Files": PyTorch’s Dataset and DataLoader classes provide a very convenient way of iterating over a dataset while training your machine learning… (Oct 4, 2019)
Synced in SyncedReview, "Facebook AI Proposes Group Normalization Alternative to Batch Normalization": As Facebook struggles with fallout from the Cambridge Analytica scandal, its research arm today delivered a welcome bit of good news in… (Mar 23, 2018)
Jesus Rodriguez in Towards AI, "How DeepMind Trains Agents That Can Play Any Game Without Human Intervention": A new paper proposes a new architecture and training environment for generally capable agents. (Aug 3, 2021)
Ketan Doshi in Towards Data Science, "Transformers Explained Visually — Not Just How, but Why They Work So Well": A gentle guide to how the attention score calculations capture relationships between words in a sequence, in plain English. (Jun 2, 2021)
Mohammed Terry-Jack, "Deep Learning: The Transformer": Sequence-to-Sequence (Seq2Seq) models actually contain two models, an Encoder and a Decoder (hence why they are also known as… (Jun 23, 2019)
Practicus AI in Towards Data Science, "How to Do Everything in Computer Vision": Using deep learning magic for computer vision. (Dec 13, 2018)