howanching-clara committed
Commit 266b587 · verified · 1 Parent(s): 69fdb4f

End of training
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
+base_model: sentence-transformers/multi-qa-MiniLM-L6-cos-v1
 tags:
 - generated_from_trainer
-base_model: sentence-transformers/multi-qa-MiniLM-L6-cos-v1
 metrics:
 - accuracy
 - precision
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [sentence-transformers/multi-qa-MiniLM-L6-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.7795
-- Accuracy: 0.4469
-- Precision: 0.4469
-- Recall: 0.4469
-- F1: 0.4469
+- Loss: 1.8218
+- Accuracy: 0.4629
+- Precision: 0.4629
+- Recall: 0.4629
+- F1: 0.4629
 
 ## Model description
 
@@ -42,9 +42,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
-- train_batch_size: 4
-- eval_batch_size: 4
+- learning_rate: 2e-05
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -52,23 +52,23 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Precision | Recall | F1     |
-|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 2.1337        | 1.0   | 687   | 1.9092          | 0.3508   | 0.3508    | 0.3508 | 0.3508 |
-| 1.8555        | 2.0   | 1374  | 1.9211          | 0.4061   | 0.4061    | 0.4061 | 0.4061 |
-| 1.5715        | 3.0   | 2061  | 1.8437          | 0.4556   | 0.4556    | 0.4556 | 0.4556 |
-| 1.3431        | 4.0   | 2748  | 1.9205          | 0.4512   | 0.4512    | 0.4512 | 0.4512 |
-| 1.1965        | 5.0   | 3435  | 2.3187          | 0.4425   | 0.4425    | 0.4425 | 0.4425 |
-| 0.8733        | 6.0   | 4122  | 2.5876          | 0.4585   | 0.4585    | 0.4585 | 0.4585 |
-| 0.742         | 7.0   | 4809  | 2.7835          | 0.4571   | 0.4571    | 0.4571 | 0.4571 |
-| 0.6883        | 8.0   | 5496  | 2.9692          | 0.4687   | 0.4687    | 0.4687 | 0.4687 |
-| 0.5149        | 9.0   | 6183  | 3.4857          | 0.4425   | 0.4425    | 0.4425 | 0.4425 |
-| 0.4392        | 10.0  | 6870  | 3.3746          | 0.4425   | 0.4425    | 0.4425 | 0.4425 |
-| 0.3495        | 11.0  | 7557  | 3.7080          | 0.4425   | 0.4425    | 0.4425 | 0.4425 |
-| 0.2802        | 12.0  | 8244  | 3.5762          | 0.4469   | 0.4469    | 0.4469 | 0.4469 |
-| 0.2958        | 13.0  | 8931  | 3.7652          | 0.4352   | 0.4352    | 0.4352 | 0.4352 |
-| 0.2162        | 14.0  | 9618  | 3.7253          | 0.4410   | 0.4410    | 0.4410 | 0.4410 |
-| 0.2158        | 15.0  | 10305 | 3.7795          | 0.4469   | 0.4469    | 0.4469 | 0.4469 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+| No log        | 1.0   | 172  | 2.2174          | 0.2926   | 0.2926    | 0.2926 | 0.2926 |
+| No log        | 2.0   | 344  | 2.0265          | 0.3057   | 0.3057    | 0.3057 | 0.3057 |
+| 2.3643        | 3.0   | 516  | 1.9485          | 0.3712   | 0.3712    | 0.3712 | 0.3712 |
+| 2.3643        | 4.0   | 688  | 1.8710          | 0.4250   | 0.4250    | 0.4250 | 0.4250 |
+| 2.3643        | 5.0   | 860  | 1.8507          | 0.4338   | 0.4338    | 0.4338 | 0.4338 |
+| 1.8634        | 6.0   | 1032 | 1.8297          | 0.4469   | 0.4469    | 0.4469 | 0.4469 |
+| 1.8634        | 7.0   | 1204 | 1.7612          | 0.4658   | 0.4658    | 0.4658 | 0.4658 |
+| 1.8634        | 8.0   | 1376 | 1.8224          | 0.4600   | 0.4600    | 0.4600 | 0.4600 |
+| 1.5986        | 9.0   | 1548 | 1.7700          | 0.4629   | 0.4629    | 0.4629 | 0.4629 |
+| 1.5986        | 10.0  | 1720 | 1.7942          | 0.4672   | 0.4672    | 0.4672 | 0.4672 |
+| 1.5986        | 11.0  | 1892 | 1.8150          | 0.4643   | 0.4643    | 0.4643 | 0.4643 |
+| 1.3916        | 12.0  | 2064 | 1.8018          | 0.4585   | 0.4585    | 0.4585 | 0.4585 |
+| 1.3916        | 13.0  | 2236 | 1.8006          | 0.4745   | 0.4745    | 0.4745 | 0.4745 |
+| 1.3916        | 14.0  | 2408 | 1.8110          | 0.4614   | 0.4614    | 0.4614 | 0.4614 |
+| 1.274         | 15.0  | 2580 | 1.8218          | 0.4629   | 0.4629    | 0.4629 | 0.4629 |
 
 
 ### Framework versions
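As a rough consistency check on the hyperparameter change above (the dataset itself is unnamed in the card, so its size is only inferred), the per-epoch step counts from the two runs' results tables imply the same training-set size:

```python
# Steps per epoch x train batch size approximates the training-set size
# (the final batch of each epoch may be partial, so the two products
# need not match exactly).
old_run = 687 * 4    # previous run: 687 steps/epoch at train_batch_size=4
new_run = 172 * 16   # this run: 172 steps/epoch at train_batch_size=16

print(old_run, new_run)  # 2748 2752
```

Both figures point to a training set of roughly 2,750 examples, so the drop from 687 to 172 steps per epoch is fully explained by the 4x larger batch size.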
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7bf8f8788e99d6510ff53158586a1dadda915a3ffa17ab86a6706b11a0d72ec7
+oid sha256:8c90270960ca546964a4b49603be9bc545ce5bdba60a97b6051c92efc0e02d92
 size 90923400
runs/Apr29_00-45-43_PC-AJ/events.out.tfevents.1714344343.PC-AJ.39420.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cbf0e8d8da0a0c6d008be3d6853735dc1d65fa1fe17226e31f07f75837057e00
-size 13668
+oid sha256:c48c3698517681fb8fb4859a7f2d41848f47cdcf9e615627089bdc1198007491
+size 14705
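The `model.safetensors` and `runs/...` entries above are Git LFS pointer files (the `version`/`oid`/`size` triple), not the binary payloads themselves. A minimal sketch of reading such a pointer with only the standard library (`parse_lfs_pointer` is a hypothetical helper, not part of any git tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8c90270960ca546964a4b49603be9bc545ce5bdba60a97b6051c92efc0e02d92
size 90923400"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 90923400
```

Only the `oid` (SHA-256 of the blob) and `size` change in this commit; identical sizes for `model.safetensors` mean the new checkpoint has the same shape, just different weights.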