---
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Kukedlc/Qwen2.5-7B-Spanish-0.2
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen2.5-7B-Instruct
- Kukedlc/Qwen2.5-7B-Spanish-0.2
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# NeuralQwen-7B-Spanish-Merge

NeuralQwen-7B-Spanish-Merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Kukedlc/Qwen2.5-7B-Spanish-0.2](https://huggingface.co/Kukedlc/Qwen2.5-7B-Spanish-0.2)

## 🧩 Configuration

```yaml
models:
  - model: Qwen/Qwen2.5-7B
    # No parameters necessary for base model
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 0.53
      weight: 0.4
  - model: Kukedlc/Qwen2.5-7B-Spanish-0.2
    parameters:
      density: 0.44
      weight: 0.6
merge_method: dare_ties
base_model: Qwen/Qwen2.5-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
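
With `dare_ties`, each non-base model contributes a sparsified delta over `Qwen/Qwen2.5-7B`: roughly the `density` fraction of its delta parameters is kept, and the surviving deltas are combined using the given `weight`. To reproduce a merge like this yourself, the config above can be passed to mergekit's command-line entry point; a minimal Colab-style sketch (the `config.yaml` filename and the `merge` output directory are assumptions, not part of the original card):

```python
# Sketch: install mergekit and run the merge described by the YAML above.
# Assumes the configuration has been saved locally as config.yaml.
!pip install -qU mergekit

# mergekit-yaml reads the config and writes the merged model into ./merge;
# --copy-tokenizer copies the base model's tokenizer next to the merged weights.
!mergekit-yaml config.yaml merge --copy-tokenizer
```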

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuralQwen-7B-Spanish-Merge"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat with the model's chat template, then generate with a text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens with temperature, top-k and nucleus sampling.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
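
Because the merge leans toward Spanish, the same pipeline can be prompted in Spanish. A small follow-up sketch (it reuses the `tokenizer` and `pipeline` objects from the snippet above; `return_full_text=False` is a standard text-generation pipeline option that returns only the newly generated tokens):

```python
# Reuse the pipeline above with a Spanish prompt
# ("What is a large language model?" in Spanish).
messages_es = [{"role": "user", "content": "¿Qué es un modelo de lenguaje de gran tamaño?"}]
prompt_es = tokenizer.apply_chat_template(messages_es, tokenize=False, add_generation_prompt=True)

# return_full_text=False returns only the model's reply, without echoing the prompt.
outputs_es = pipeline(prompt_es, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, return_full_text=False)
print(outputs_es[0]["generated_text"])
```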