# redis/langcache-embed-v3
This is a sentence-transformers model fine-tuned from `Alibaba-NLP/gte-modernbert-base` on the LangCache Sentence Pairs (all) dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for sentence-pair similarity.
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
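The pooling config above sets `pooling_mode_cls_token=True`, i.e. the sentence embedding is the hidden state of the first (CLS) token rather than a mean over tokens. A minimal sketch of that operation with NumPy (the array shapes and function name here are illustrative, not part of the library API):

```python
import numpy as np

def cls_pool(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling: keep only the first token's vector per sequence.

    token_embeddings: (batch, seq_len, hidden) transformer outputs.
    Returns a (batch, hidden) sentence embedding, matching
    pooling_mode_cls_token=True in the config above.
    """
    return token_embeddings[:, 0, :]

# Toy batch: 2 sequences, 4 tokens each, 3 hidden dims.
batch = np.arange(24, dtype=np.float32).reshape(2, 4, 3)
pooled = cls_pool(batch)
print(pooled.shape)  # (2, 3)
```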
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("redis/langcache-embed-v3")
# Run inference
sentences = [
'"If you click ""like"" on an old post that someone made on your wall yet you\'re no longer Facebook friends, will they still receive a notification?"',
'"If you click ""like"" on an old post that someone made on your wall yet you\'re no longer Facebook friends, will they still receive a notification?"',
'"If your teenage son posted ""La commedia e finita"" on his Facebook wall, would you be concerned?"',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 1.0000, 0.6758],
# [1.0000, 1.0000, 0.6758],
# [0.6758, 0.6758, 1.0078]], dtype=torch.bfloat16)
Information retrieval metrics on the `test` split, evaluated with `ir_evaluator.CustomInformationRetrievalEvaluator`:

| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.5956 |
| cosine_precision@1 | 0.5956 |
| cosine_recall@1 | 0.5781 |
| cosine_ndcg@10 | 0.7776 |
| cosine_mrr@1 | 0.5956 |
| cosine_map@100 | 0.7276 |
| cosine_auc_precision_cache_hit_ratio | 0.364 |
| cosine_auc_similarity_distribution | 0.154 |
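For reference, the first two rows of the table are standard ranking metrics: accuracy@1 is the fraction of queries whose top-ranked document is relevant, and MRR@k averages the reciprocal rank of the first relevant hit. A small sketch of how they are computed (function names and toy data are illustrative, not the card's evaluator):

```python
def accuracy_at_1(rankings, relevant):
    """rankings: one ranked list of doc ids per query.
    relevant: one set of relevant doc ids per query."""
    hits = sum(1 for ranked, rel in zip(rankings, relevant) if ranked[0] in rel)
    return hits / len(rankings)

def mrr_at_k(rankings, relevant, k=10):
    """Mean reciprocal rank of the first relevant doc within the top k."""
    total = 0.0
    for ranked, rel in zip(rankings, relevant):
        for i, doc in enumerate(ranked[:k]):
            if doc in rel:
                total += 1.0 / (i + 1)
                break
    return total / len(rankings)

# Toy run: query 1 ranks its relevant doc first, query 2 ranks it second.
rankings = [["a", "b"], ["c", "d"]]
relevant = [{"a"}, {"d"}]
print(accuracy_at_1(rankings, relevant))  # 0.5
print(mrr_at_k(rankings, relevant))       # 0.75
```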
The dataset has `anchor`, `positive`, and `negative` columns:

| | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| anchor | positive | negative |
|---|---|---|
| What high potential jobs are there other than computer science? | What high potential jobs are there other than computer science? | Why IT or Computer Science jobs are being over rated than other Engineering jobs? |
| Would India ever be able to develop a missile system like S300 or S400 missile? | Would India ever be able to develop a missile system like S300 or S400 missile? | Should India buy the Russian S400 air defence missile system? |
| water from the faucet is being drunk by a yellow dog | A yellow dog is drinking water from the faucet | Childlessness is low in Eastern European countries. |
Loss: `losses.ArcFaceInBatchLoss` with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "cos_sim",
    "gather_across_devices": false
}
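The `scale` parameter multiplies the in-batch cosine similarities before a softmax cross-entropy, where each anchor's same-index positive is the correct class and the other rows in the batch act as negatives. A minimal NumPy sketch of that in-batch objective, with the stated `scale=20.0` (the ArcFace angular-margin term is deliberately omitted here for brevity, so this is a simplification, not the exact loss):

```python
import numpy as np

def in_batch_softmax_loss(anchors: np.ndarray, positives: np.ndarray,
                          scale: float = 20.0) -> float:
    """In-batch contrastive loss over scaled cosine similarities.

    anchors, positives: (batch, dim) embeddings; row i of `positives`
    is the correct match for row i of `anchors`, all other rows are
    in-batch negatives. The ArcFace margin is omitted (simplification).
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch) sims
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # diagonal = true pairs

# Orthogonal anchors matching their own positives: near-zero loss.
eye = np.eye(4)
print(in_batch_softmax_loss(eye, eye) < 0.01)  # True
```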
Training logs:

| Epoch | Step | test_cosine_ndcg@10 |
|---|---|---|
| -1 | -1 | 0.7776 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
Base model: `answerdotai/ModernBERT-base`