- NYU Center for Data Science, "Semi- and Self-Supervised Learning Help Clinicians Minimize Manual Labeling in Medical Image…" (Jul 5): CDS study reduces manual labeling in medical imaging by leveraging self-supervised and semi-supervised learning.
- Varun Bhardwaj, "ColBERT: A Complete Guide" (Aug 19, 2022): Me: BERT, can you please find me a Document Retrieval Model? BERT: Yes sure, here is your State Of The Art (SOTA) ColBERT model. Me: What's…
- Mengliu Zhao in Towards Data Science, "From MOCO v1 to v3: Towards Building a Dynamic Dictionary for Self-Supervised Learning — Part 1" (Jul 4): A gentle recap of the momentum contrast learning framework.
- Dhruv Matani in Towards Data Science, "NT-Xent (Normalized Temperature-Scaled Cross-Entropy) Loss Explained and Implemented in PyTorch" (Jun 13, 2023): An intuitive explanation of the NT-Xent loss with a step-by-step walkthrough of the operation and its implementation in PyTorch.
- Priyanshu Maurya, "Alternative to Cosine Similarity Function for Self-Supervised Tasks" (Jun 29): In self-supervised training, we use contrastive loss to learn the feature embeddings of structures or text. For contrastive loss, we often…
- Anuj Dutt, "Emerging Properties in Self-Supervised Vision Transformers (DINO) — Paper Summary" (Oct 31, 2023)
- Challenge Enthusiast, "Self-Supervised Learning Demonstrated" (Jun 29): Self-supervised learning is a machine learning approach that enables models to learn from unlabelled data…
- Hamdi Boukamcha, "Object Tracking | Review of 'Matching Anything by Segmenting Anything'" (Jun 9)