Files changed (1)
  1. README.md +62 -49
README.md CHANGED
@@ -1,49 +1,62 @@
- ---
- base_model:
- - Qwen/Qwen2.5-32B-Instruct
- - Qwen/Qwen2.5-32B
- - Qwen/QwQ-32B-Preview
- - deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- - Qwen/Qwen2.5-Coder-32B-Instruct
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # output-model-directory
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- * [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
- * [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- * [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
-
- models:
- - model: Qwen/Qwen2.5-32B-Instruct
- - model: Qwen/Qwen2.5-Coder-32B-Instruct
- - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- - model: Qwen/QwQ-32B-Preview
- base_model: Qwen/Qwen2.5-32B
- merge_method: model_stock
- parameters:
-   normalize: true
- dtype: bfloat16
- tokenizer_source: Qwen/Qwen2.5-Coder-32B-Instruct
-
- ```
+ ---
+ base_model:
+ - Qwen/Qwen2.5-32B-Instruct
+ - Qwen/Qwen2.5-32B
+ - Qwen/QwQ-32B-Preview
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
+ - Qwen/Qwen2.5-Coder-32B-Instruct
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # output-model-directory
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
+ * [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
+ * [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
+ * [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+
+ models:
+ - model: Qwen/Qwen2.5-32B-Instruct
+ - model: Qwen/Qwen2.5-Coder-32B-Instruct
+ - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
+ - model: Qwen/QwQ-32B-Preview
+ base_model: Qwen/Qwen2.5-32B
+ merge_method: model_stock
+ parameters:
+   normalize: true
+ dtype: bfloat16
+ tokenizer_source: Qwen/Qwen2.5-Coder-32B-Instruct
+
+ ```
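
For context on what `merge_method: model_stock` in the config above does: per the [Model Stock](https://arxiv.org/abs/2403.19522) paper, each weight tensor is merged by interpolating between the average of the fine-tuned checkpoints and the base model's weights, with the interpolation ratio set by how closely the fine-tuned checkpoints agree with one another. The snippet below is a rough, illustrative sketch of that rule for a single tensor; it is not mergekit's actual implementation, and the helper name is made up.

```python
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative Model Stock rule for one weight tensor (not mergekit's code).

    base  : the pretrained weight tensor (here, from Qwen/Qwen2.5-32B)
    tuned : the matching tensors from the fine-tuned models being merged
    """
    # Task vectors: how far each fine-tuned model moved away from the base.
    deltas = [(w - base).flatten().float() for w in tuned]
    n = len(deltas)

    # Average pairwise cosine similarity between the task vectors.
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            cos_sum += F.cosine_similarity(deltas[i], deltas[j], dim=0).item()
            pairs += 1
    cos_theta = cos_sum / pairs

    # Interpolation ratio from the paper: checkpoints that agree closely
    # (cos_theta near 1) pull the merge toward their average, while divergent
    # checkpoints keep the result near the base weights.
    t = n * cos_theta / ((n - 1) * cos_theta + 1)

    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```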
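
To reproduce the merge itself, the YAML above would typically be saved to a file and run through mergekit's `mergekit-yaml` entry point. Once merged, and since the card declares `library_name: transformers`, the checkpoint should load like any other Qwen2.5-style causal LM. Below is a minimal usage sketch, assuming the merge output lives at `./output-model-directory` (the folder name used in this card; substitute a Hub repo id if the model is uploaded).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path (or Hub repo id) of the merged model; ./output-model-directory is assumed here.
model_path = "./output-model-directory"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires accelerate; shards the 32B model across devices
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```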