| Model | Spearman Correlation | Paper | Code |
| --- | --- | --- | --- |
| PromCSE-RoBERTa-large (0.355B) | 0.8243 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | |
| PromptEOL+CSE+LLaMA-30B | 0.8238 | Scaling Sentence Embeddings with Large Language Models | |
| PromptEOL+CSE+OPT-13B | 0.8206 | Scaling Sentence Embeddings with Large Language Models | |
| SimCSE-RoBERTa-large | 0.8195 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| PromptEOL+CSE+OPT-2.7B | 0.8129 | Scaling Sentence Embeddings with Large Language Models | |
| SentenceBERT | 0.7462 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| SRoBERTa-NLI-base | 0.7446 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| SRoBERTa-NLI-large | 0.7429 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| Dino (STS) | 0.7426 | Generating Datasets with Pretrained Language Models | |
| SBERT-NLI-large | 0.7375 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| SBERT-NLI-base | 0.7291 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.7276 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-cross (unsup.) | 0.7192 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.7163 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.7133 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Mirror-RoBERTa-base (unsup.) | 0.706 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Mirror-BERT-base (unsup.) | 0.703 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Trans-Encoder-BERT-base-cross (unsup.) | 0.6952 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Dino (STSb) | 0.6809 | Generating Datasets with Pretrained Language Models | |
| Rematch | 0.6772 | Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity | |
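The scores above are Spearman rank correlations between each model's predicted sentence-pair similarities and human-annotated gold scores. A minimal, dependency-free sketch of how that metric is computed (the example values are illustrative, not drawn from any benchmark; ties are assumed absent for simplicity):

```python
def spearman_correlation(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Assumes xs and ys have equal length and no tied values
    (real evaluations, e.g. scipy.stats.spearmanr, average tied ranks).
    """
    def ranks(vals):
        # Rank 1 for the smallest value, n for the largest.
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2  # mean of ranks 1..n
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var_x = sum((a - mean) ** 2 for a in rx)
    var_y = sum((b - mean) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical model similarities vs. gold human scores:
model_scores = [0.91, 0.12, 0.55, 0.78]
gold_scores = [4.8, 0.5, 2.9, 4.1]
print(spearman_correlation(model_scores, gold_scores))  # → 1.0 (same ordering)
```

Because Spearman correlation depends only on rank order, a model need not match the gold score scale, only the relative ordering of sentence pairs, which is why it is the standard STS leaderboard metric.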