---
language:
- en
license: apache-2.0
tags:
- biencoder
- sentence-transformers
- text-classification
- sentence-pair-classification
- semantic-similarity
- semantic-search
- retrieval
- reranking
- generated_from_trainer
- dataset_size:8000000
- loss:ArcFaceInBatchLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: >-
"How much would I need to narrate a ""Let's Play"" video in order to make
money from it on YouTube?"
sentences:
- How much money do people make from YouTube videos with 1 million views?
- >-
"How much would I need to narrate a ""Let's Play"" video in order to
make money from it on YouTube?"
- '"Does the sentence, ""I expect to be disappointed,"" make sense?"'
- source_sentence: '"I appreciate that.'
sentences:
- >-
"How is the Mariner rewarded in ""The Rime of the Ancient Mariner"" by
Samuel Taylor Coleridge?"
- '"I appreciate that.'
- I can appreciate that.
- source_sentence: >-
"""It is very easy to defeat someone, but too hard to win some one"". What
does the previous sentence mean?"
sentences:
- '"How can you use the word ""visceral"" in a sentence?"'
- >-
"""It is very easy to defeat someone, but too hard to win some one"".
What does the previous sentence mean?"
- >-
"What does ""The loudest one in the room is the weakest one in the
room."" Mean?"
- source_sentence: >-
" We condemn this raid which is in our view illegal and morally and
politically unjustifiable , " London-based NCRI official Ali Safavi told
Reuters by telephone .
sentences:
- >-
London-based NCRI official Ali Safavi told Reuters : " We condemn this
raid , which is in our view illegal and morally and politically
unjustifiable . "
- >-
The social awkwardness is complicated by the fact that Marianne is a
white girl living with a black family .
- art's cause, this in my opinion
- source_sentence: >-
"If you click ""like"" on an old post that someone made on your wall yet
you're no longer Facebook friends, will they still receive a
notification?"
sentences:
- >-
"Is there is any two wheeler having a gear box which has the feature
""automatic neutral"" when the engine is off while it is in gear?"
- >-
"If you click ""like"" on an old post that someone made on your wall yet
you're no longer Facebook friends, will they still receive a
notification?"
- >-
"If your teenage son posted ""La commedia e finita"" on his Facebook
wall, would you be concerned?"
datasets:
- redis/langcache-sentencepairs-v2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_precision@1
- cosine_recall@1
- cosine_ndcg@10
- cosine_mrr@1
- cosine_map@100
model-index:
- name: Redis fine-tuned BiEncoder model for semantic caching on LangCache
results:
- task:
type: custom-information-retrieval
name: Custom Information Retrieval
dataset:
name: test
type: test
metrics:
- type: cosine_accuracy@1
value: 0.6162
name: Cosine Accuracy@1
- type: cosine_precision@1
value: 0.6162
name: Cosine Precision@1
- type: cosine_recall@1
value: 0.5987
name: Cosine Recall@1
- type: cosine_ndcg@10
value: 0.7883
name: Cosine Ndcg@10
- type: cosine_mrr@1
value: 0.6162
name: Cosine Mrr@1
- type: cosine_map@100
value: 0.7416
name: Cosine Map@100
---

# Redis fine-tuned BiEncoder model for semantic caching on LangCache
This is a [sentence-transformers](https://www.sbert.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v2) dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for sentence-pair similarity.
## Model Details

### Model Description

- Model Type: Sentence Transformer
- Base model: [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: LangCache Sentence Pairs (all)
- Language: en
- License: apache-2.0
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://www.sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
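The stack above is a BERT encoder followed by attention-mask-aware mean pooling (there is no separate normalization module). As a rough illustration of what the `Pooling` module computes, the sketch below reproduces the embedding with the plain 🤗 Transformers API; loading this repository directly with `AutoModel` is an assumption for illustration, not the recommended loading path.

```python
# Minimal sketch: reproduce the Transformer + mean-pooling stack by hand.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("redis/langcache-embed-v3-small")
model = AutoModel.from_pretrained("redis/langcache-embed-v3-small")

texts = ["I appreciate that.", "I can appreciate that."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # [batch, seq_len, 384]

# Mean pooling over non-padding tokens, as in the Pooling module above.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([2, 384])
```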
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("redis/langcache-embed-v3-small")

# Run inference
sentences = [
    '"If you click ""like"" on an old post that someone made on your wall yet you\'re no longer Facebook friends, will they still receive a notification?"',
    '"If you click ""like"" on an old post that someone made on your wall yet you\'re no longer Facebook friends, will they still receive a notification?"',
    '"If your teenage son posted ""La commedia e finita"" on his Facebook wall, would you be concerned?"',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor of shape [3, 3]
```
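Since this model targets semantic caching, a natural next step is a cache lookup keyed on embedding similarity. The sketch below is a minimal illustration of that pattern; the cache contents and the 0.9 threshold are assumptions chosen for the example, not values recommended by the card.

```python
# Minimal sketch of a semantic cache lookup built on this model.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("redis/langcache-embed-v3-small")

# Toy cache: previously seen queries mapped to their stored responses.
cache = {
    "How do I reset my password?": "Click 'Forgot password' on the login page.",
}
cached_queries = list(cache.keys())
cached_embeddings = model.encode(cached_queries)

def lookup(query: str, threshold: float = 0.9):
    """Return a cached response if a semantically similar query exists."""
    query_embedding = model.encode([query])
    scores = model.similarity(query_embedding, cached_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) >= threshold:
        return cache[cached_queries[best]]
    return None  # cache miss: call the LLM, then store the new pair

print(lookup("How can I reset my password?"))
```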
## Evaluation

### Metrics

#### Custom Information Retrieval

- Dataset: `test`
- Evaluated with `ir_evaluator.CustomInformationRetrievalEvaluator`
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.6162 |
| cosine_precision@1 | 0.6162 |
| cosine_recall@1 | 0.5987 |
| cosine_ndcg@10 | 0.7883 |
| cosine_mrr@1 | 0.6162 |
| cosine_map@100 | 0.7416 |
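The `CustomInformationRetrievalEvaluator` named above is not part of the public library, so the sketch below uses the built-in `InformationRetrievalEvaluator` to show how metrics of this form are produced; the toy queries, corpus, and relevance judgments are placeholders standing in for the actual `test` split.

```python
# Minimal sketch of an IR evaluation, assuming toy data in place of the test split.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("redis/langcache-embed-v3-small")

queries = {"q1": "I appreciate that."}
corpus = {"d1": "I can appreciate that.", "d2": "An unrelated sentence."}
relevant_docs = {"q1": {"d1"}}  # which corpus entries answer each query

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="test",
)
results = evaluator(model)
print(results["test_cosine_ndcg@10"])
```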
## Training Details

### Training Dataset

#### LangCache Sentence Pairs (all)

- Dataset: LangCache Sentence Pairs (all)
- Size: ~8,000,000 training samples
- Columns: `anchor`, `positive`, and `negative`
- Loss: `losses.ArcFaceInBatchLoss` with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
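`ArcFaceInBatchLoss` is not one of the stock Sentence Transformers losses, so the following is only a sketch of the underlying idea: an in-batch-negatives cross-entropy over scaled cosine similarities, with an additive angular margin applied to each anchor's true positive. The `scale=20.0` matches the parameters above; the margin value is an assumption for illustration.

```python
# Sketch of an ArcFace-style loss with in-batch negatives (not the exact
# training code; margin=0.3 is an assumed value).
import torch
import torch.nn.functional as F

def arcface_in_batch_loss(anchors, positives, scale=20.0, margin=0.3):
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    cos = anchors @ positives.T  # [batch, batch] cosine similarities
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    # Add the angular margin only on the diagonal (each anchor's true positive).
    eye = torch.eye(len(cos), device=cos.device)
    logits = scale * torch.cos(theta + margin * eye)
    labels = torch.arange(len(cos), device=cos.device)
    return F.cross_entropy(logits, labels)
```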
### Evaluation Dataset

#### LangCache Sentence Pairs (all)

- Dataset: LangCache Sentence Pairs (all)
- Columns: `anchor`, `positive`, and `negative`
- Loss: `losses.ArcFaceInBatchLoss` with the same parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
### Training Logs
| Epoch | Step | test_cosine_ndcg@10 |
|---|---|---|
| 4.0 | 40000 | 0.7880 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 5.1.0
- Transformers: 4.56.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```