Sik-Ho Tsang, "Brief Review — An Unsupervised Sentence Embedding Method by Mutual Information Maximization" (IS-BERT, an unsupervised approach using mutual information), Sep 20
YOUSSEF CHAMRAH, "Setfit unpacked: When Sentence transformers go to 'classification Gym'", Jan 13
Sik-Ho Tsang, "Brief Review — SimCSE: Simple Contrastive Learning of Sentence Embeddings" (a dropout approach for contrastive learning of sentence embeddings), Sep 12
Mathias Leys in ML6team, "The Art of Pooling Embeddings 🎨" (in this blog post we discover the complexity of pooling that hides behind its apparent simplicity), Jun 20, 2022
Sik-Ho Tsang, "Review — MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of…" (MiniLM, for compressing language models), Sep 8
Joe Nyirenda, "Mean Pooling with Sentence Transformers" (when using sentence transformers, you often have to chunk your documents to comply with the models' maximum sequence length), May 29
Sik-Ho Tsang, "Brief Review — Multilingual E5 Text Embeddings: A Technical Report" (Multilingual E5 extends the English E5 models), Sep 4
Priyanthan Govindaraj, "Building Sentence Embeddings with Dual-Encoder: A Comprehensive Guide" (learn to build and train a dual-encoder model using BERT from scratch), Aug 26