# mdeberta-ontonotes5
This is a multilingual DeBERTa (mDeBERTa v3) model fine-tuned for the Named Entity Recognition (NER) task on the OntoNotes 5.0 label set. It is based on the rustemgareev/mdeberta-v3-base-lite model.
## Usage

```python
from transformers import pipeline

# Initialize the NER pipeline
ner_pipeline = pipeline(
    "token-classification",
    model="rustemgareev/mdeberta-ontonotes5",
    aggregation_strategy="simple",
)

# Example text
text = "Apple Inc. is looking at buying a U.K. startup for $1 billion in London next week."

# Get predictions
entities = ner_pipeline(text)

# Print the results
for entity in entities:
    print(f"Entity: {entity['word']}, Label: {entity['entity_group']}, Score: {entity['score']:.4f}")

# Expected output:
# Entity: Apple Inc., Label: ORGANIZATION, Score: 0.9975
# Entity: U.K., Label: GPE, Score: 0.9956
# Entity: $1 billion, Label: MONEY, Score: 0.9981
# Entity: London, Label: GPE, Score: 0.9981
# Entity: next week, Label: DATE, Score: 0.9940
```
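If you prefer to load the tokenizer and model explicitly, for example to inspect the label mapping or control device placement, the sketch below uses the generic `transformers` Auto classes rather than anything specific to this checkpoint; only the model name comes from this card, and the exact labels printed will depend on the fine-tuned head's configuration.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "rustemgareev/mdeberta-ontonotes5"

# Load tokenizer and token-classification head explicitly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Inspect the label mapping exposed by the fine-tuned head
print(model.config.id2label)

# Build the same pipeline from the loaded objects, e.g. to pin a device
ner_pipeline = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
    device=-1,  # CPU; set to 0 if a CUDA GPU is available
)

print(ner_pipeline("Apple Inc. is looking at buying a U.K. startup for $1 billion."))
```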
## License
This model is distributed under the MIT License.