Semantic Textual Similarity on SICK

Evaluation Metrics

Spearman Correlation
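Models are ranked by the Spearman rank correlation between their predicted similarity scores for SICK sentence pairs and the human-annotated relatedness scores. As a minimal sketch of the metric (assuming predicted scores, e.g. cosine similarities, are already available; the values below are purely illustrative), it can be computed with scipy:

```python
# Minimal sketch of the evaluation metric: Spearman rank correlation between
# predicted pair similarities and gold SICK relatedness ratings.
from scipy.stats import spearmanr

# Hypothetical example values; a real evaluation uses the full SICK test split.
gold_scores = [4.5, 3.6, 1.2, 2.8, 4.9]           # human relatedness ratings
predicted_scores = [0.91, 0.75, 0.12, 0.40, 0.88]  # e.g. cosine similarities

rho, p_value = spearmanr(predicted_scores, gold_scores)
print(f"Spearman correlation: {rho:.4f}")
```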

Evaluation Results

The performance of each model on this benchmark is shown in the table below.

| Model | Spearman Correlation | Paper Title | Repository |
|---|---|---|---|
| PromCSE-RoBERTa-large (0.355B) | 0.8243 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | |
| PromptEOL+CSE+LLaMA-30B | 0.8238 | Scaling Sentence Embeddings with Large Language Models | |
| PromptEOL+CSE+OPT-13B | 0.8206 | Scaling Sentence Embeddings with Large Language Models | |
| SimCSE-RoBERTa-large | 0.8195 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| PromptEOL+CSE+OPT-2.7B | 0.8129 | Scaling Sentence Embeddings with Large Language Models | |
| SentenceBERT | 0.7462 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| SRoBERTa-NLI-base | 0.7446 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| SRoBERTa-NLI-large | 0.7429 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| Dino (STS) | 0.7426 | Generating Datasets with Pretrained Language Models | |
| SBERT-NLI-large | 0.7375 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| SBERT-NLI-base | 0.7291 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.7276 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-cross (unsup.) | 0.7192 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.7163 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.7133 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Mirror-RoBERTa-base (unsup.) | 0.706 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Mirror-BERT-base (unsup.) | 0.703 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Trans-Encoder-BERT-base-cross (unsup.) | 0.6952 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Dino (STSb) | 0.6809 | Generating Datasets with Pretrained Language Models | |
| Rematch | 0.6772 | Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity | |
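Most systems on this leaderboard are bi-encoders: each sentence is embedded independently, the pair's similarity is the cosine of the two embeddings, and that score is correlated with the gold relatedness ratings. The sketch below illustrates this pipeline; it assumes the SICK data is available as the `sick` dataset on the Hugging Face Hub and uses an illustrative `sentence-transformers` checkpoint, not one of the listed models.

```python
# Hedged end-to-end sketch of a bi-encoder evaluation on SICK:
# embed both sentences, take cosine similarity, report Spearman correlation.
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

# Assumed dataset location: the "sick" dataset on the Hugging Face Hub,
# with sentence_A / sentence_B / relatedness_score fields.
sick_test = load_dataset("sick", split="test")

# Any sentence-embedding bi-encoder can be plugged in; this checkpoint is
# only an example, not a model from the leaderboard above.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

emb_a = model.encode(sick_test["sentence_A"], convert_to_tensor=True)
emb_b = model.encode(sick_test["sentence_B"], convert_to_tensor=True)

# Cosine similarity of each pair (diagonal of the pairwise similarity matrix).
cosine_scores = util.cos_sim(emb_a, emb_b).diagonal().cpu().numpy()

rho, _ = spearmanr(cosine_scores, sick_test["relatedness_score"])
print(f"Spearman correlation on SICK test: {rho:.4f}")
```

Cross-encoder entries (e.g. the "cross" Trans-Encoder variants) instead score the concatenated sentence pair directly, but the reported metric is computed the same way against the gold relatedness scores.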