| Model | Spearman correlation | Paper / Source | Code |
| ------ | ------ | ------ | ------ |
| AnglE-LLaMA-13B | 0.8689 | AnglE-optimized Text Embeddings | |
| PromptEOL+CSE+LLaMA-30B | 0.8585 | Scaling Sentence Embeddings with Large Language Models | |
| AnglE-LLaMA-7B-v2 | 0.8579 | AnglE-optimized Text Embeddings | |
| AnglE-LLaMA-7B | 0.8549 | AnglE-optimized Text Embeddings | |
| PromptEOL+CSE+OPT-13B | 0.8534 | Scaling Sentence Embeddings with Large Language Models | |
| PromptEOL+CSE+OPT-2.7B | 0.8480 | Scaling Sentence Embeddings with Large Language Models | |
| PromCSE-RoBERTa-large (0.355B) | 0.8381 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | |
| SimCSE-RoBERTa-large | 0.8236 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.8194 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-large-bi (unsup.) | 0.8176 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.8137 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-base-cross (unsup.) | 0.7903 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.7790 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| DiffCSE-BERT-base | 0.7647 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | |
| DiffCSE-RoBERTa-base | 0.7549 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | |
| SBERT-NLI-large | 0.7490 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| Mirror-RoBERTa-base (unsup.) | 0.7320 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Mirror-BERT-base (unsup.) | 0.7130 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Dino (STSb) | 0.7125 | Generating Datasets with Pretrained Language Models | |
| BERT-large-flow (target) | 0.6942 | On the Sentence Embeddings from Pre-trained Language Models | |
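
The scores above are Spearman rank correlations on the STS benchmark test set: each model embeds both sentences of a pair, the pair is scored by cosine similarity, and the resulting similarities are rank-correlated with the gold human labels. The snippet below is a minimal sketch of that evaluation protocol (not any specific model's pipeline); the `embed` function is a hypothetical placeholder for whichever encoder from the table is being evaluated.

```python
import numpy as np
from scipy.stats import spearmanr


def embed(sentences):
    """Hypothetical placeholder: returns one deterministic
    pseudo-embedding per sentence. Swap in a real encoder
    (e.g. any model from the table above) to reproduce its score."""
    return np.stack([
        np.random.default_rng(abs(hash(s)) % (2**32)).normal(size=768)
        for s in sentences
    ])


def sts_spearman(pairs, gold_scores):
    """Spearman correlation between cosine similarities of the
    embedded sentence pairs and the gold similarity labels."""
    a = embed([s1 for s1, _ in pairs])
    b = embed([s2 for _, s2 in pairs])
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    return spearmanr(cos, gold_scores).correlation


pairs = [
    ("A man is playing a guitar.", "A person plays guitar."),
    ("A dog runs in the park.", "A dog is running outside."),
    ("A woman is cooking.", "The stock market fell today."),
]
print(sts_spearman(pairs, gold_scores=[4.8, 4.5, 0.2]))
```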