---
title: README
emoji: 🌍
colorFrom: purple
colorTo: red
sdk: static
pinned: false
---
MTEB is a Python framework for evaluating embedding and retrieval systems for both text and images.
MTEB covers more than 1000 languages and a diverse set of tasks, from classics like classification and clustering to use-case-specialized tasks such as legal, code, or healthcare retrieval.
To get started, install [`mteb`](https://github.com/embeddings-benchmark/mteb) and check out our [documentation](https://embeddings-benchmark.github.io/mteb/usage/get_started/).
| Overview | |
|--------------------------------|--------------------------------------------------------------------------------------|
| πŸ“ˆ [Leaderboard] | The interactive leaderboard of the benchmark |
| **Get Started**                |                                                                                      |
| πŸƒ [Get Started] | Overview of how to use mteb |
| πŸ€– [Defining Models]           | How to use existing models and define custom ones                                    |
| πŸ“‹ [Selecting tasks]           | How to select tasks, benchmarks, splits, etc.                                        |
| 🏭 [Running Evaluation]        | How to run evaluations, including cache management, speeding up evaluations, etc.    |
| πŸ“Š [Loading Results] | How to load and work with existing model results |
| **Overview**                   |                                                                                      |
| πŸ“‹ [Tasks] | Overview of available tasks |
| πŸ“ [Benchmarks] | Overview of available benchmarks |
| πŸ€– [Models]                    | Overview of available models                                                         |
| **Contributing** | |
| πŸ€– [Adding a model] | How to submit a model to MTEB and to the leaderboard |
| πŸ‘©β€πŸ’» [Adding a dataset] | How to add a new task/dataset to MTEB |
| πŸ‘©β€πŸ’» [Adding a benchmark] | How to add a new benchmark to MTEB and to the leaderboard |
| 🀝 [Contributing] | How to contribute to MTEB and set it up for development |
[Get Started]: https://embeddings-benchmark.github.io/mteb/usage/get_started/
[Defining Models]: https://embeddings-benchmark.github.io/mteb/usage/defining_the_model/
[Selecting tasks]: https://embeddings-benchmark.github.io/mteb/usage/selecting_tasks/
[Running Evaluation]: https://embeddings-benchmark.github.io/mteb/usage/running_the_evaluation/
[Loading Results]: https://embeddings-benchmark.github.io/mteb/usage/loading_results/
[Tasks]: https://embeddings-benchmark.github.io/mteb/overview/available_tasks/any2anymultilingualretrieval/
[Benchmarks]: https://embeddings-benchmark.github.io/mteb/overview/available_benchmarks/
[Models]: https://embeddings-benchmark.github.io/mteb/overview/available_models/text/
[Contributing]: https://embeddings-benchmark.github.io/mteb/CONTRIBUTING/
[Adding a model]: https://embeddings-benchmark.github.io/mteb/contributing/adding_a_model/
[Adding a dataset]: https://embeddings-benchmark.github.io/mteb/contributing/adding_a_dataset/
[Adding a benchmark]: https://embeddings-benchmark.github.io/mteb/contributing/adding_a_benchmark/
[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard