---
library_name: peft
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
  - base_model:adapter:microsoft/mdeberta-v3-base
  - lora
  - transformers
metrics:
  - name: accuracy
    type: accuracy
    value: 0.9541
  - name: f1
    type: f1
    value: 0.95
model-index:
  - name: Subodh_MFND_mdeberta_v3
    results:
      - task:
          type: text-classification
          name: Multilingual Fake News Detection
        dataset:
          name: Custom Multilingual Fake News
          type: text
        metrics:
          - name: accuracy
            type: accuracy
            value: 0.9541
          - name: f1
            type: f1
            value: 0.95
---

Subodh_MFND_mdeberta_v3

This model is a LoRA fine-tuned version of microsoft/mdeberta-v3-base for multilingual fake news detection (Bangla, English, Hindi, Spanish).
Final evaluation set results:

  • Accuracy: 95.41%
  • F1: 0.95
  • Precision/Recall: not reported.

Model description

  • Privacy-preserving, multilingual fake news detection.
  • Fine-tuned with LoRA adapters (r=8, α=16, dropout=0.1).
  • Effective batch size: 8 (4 per device × 2 gradient-accumulation steps), Epochs: 3, Learning rate: 2e-4.
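
The adapter hyperparameters above can be written as a `peft` configuration. A minimal sketch; the `task_type` is an assumption (the card does not state it), and target modules are left to the PEFT defaults for this architecture:

```python
from peft import LoraConfig

# LoRA adapter configuration mirroring the card: r=8, alpha=16, dropout=0.1.
# task_type="SEQ_CLS" is assumed for binary fake/real classification.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)
```

With transformers, the base model would then be wrapped via `get_peft_model(AutoModelForSequenceClassification.from_pretrained("microsoft/mdeberta-v3-base", num_labels=2), lora_config)`, which trains only the adapter weights.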

Intended uses & limitations

  • Intended for research and production on multilingual fake news detection tasks.
  • Works on Bangla, English, Hindi, and Spanish news content.
  • Not intended for languages outside the fine-tuning set.

Training and evaluation data

  • Dataset: Custom multilingual fake news corpus (Bangla, English, Hindi, Spanish)
  • Supervised classification (fake/real)

Training procedure

Training hyperparameters

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW
  • lr_scheduler_type: linear
  • num_epochs: 3
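
The per-device batch size and gradient accumulation combine into the stated total train batch size, and together with the step counts in the results table they imply the training-set size. A quick sanity check:

```python
train_batch_size = 4
gradient_accumulation_steps = 2

# Effective (total) batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 8

# 9375 optimizer steps per epoch (from the training results table)
steps_per_epoch = 9375
approx_train_examples = steps_per_epoch * total_train_batch_size
print(approx_train_examples)  # 75000
```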

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|---------------|-------|-------|-----------------|----------|--------|
| 0.4942        | 1.0   | 9375  | 0.4617          | 0.7785   | 0.7776 |
| 0.4948        | 2.0   | 18750 | 0.4684          | 0.7591   | 0.7424 |
| 0.4892        | 3.0   | 28125 | 0.4376          | 0.7702   | 0.7569 |
| Final Test    | -     | -     | -               | 0.9541   | 0.95   |
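
The accuracy and F1 columns can be reproduced from raw predictions. A minimal pure-Python sketch for the binary fake/real case (the card does not state whether its F1 is binary, macro, or weighted; this shows the binary variant, and the 1 = fake label mapping is an assumption):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: 1 = fake, 0 = real (label mapping assumed)
y_true = [1, 1, 0, 0]
y_pred = [1, 0, 0, 1]
print(accuracy(y_true, y_pred))   # 0.5
print(f1_binary(y_true, y_pred))  # 0.5
```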

Framework versions

  • PEFT 0.17.1
  • Transformers 4.56.1
  • PyTorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.0