Fix the wrong adapter in qwen2-moe-qlora example (#1501) [skip ci]
examples/qwen/qwen2-moe-qlora.yaml
CHANGED

@@ -16,7 +16,7 @@ sequence_len: 1024 # supports up to 32k
 sample_packing: false
 pad_to_sequence_len: false
 
-adapter:
+adapter: qlora
 lora_model_dir:
 lora_r: 32
 lora_alpha: 16
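For readers applying this fix by hand, the affected section of examples/qwen/qwen2-moe-qlora.yaml should read as below after the change. This is a sketch of just this hunk, not the whole file; the likely effect of the original empty `adapter:` value (the LoRA settings being ignored and a full fine-tune being attempted instead) is an assumption based on how adapter configs are typically interpreted, not something stated in the commit.

```yaml
sample_packing: false
pad_to_sequence_len: false

adapter: qlora   # was previously empty, so the qlora adapter was never enabled
lora_model_dir:
lora_r: 32
lora_alpha: 16
```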