lmflow-optimalscale committed on
Commit
f2874b5
·
1 Parent(s): 28725cc

Update README.md

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -1,4 +1,11 @@
-ClimbLab is a high-quality pre-training corpus released by NVIDIA. Here is the description:
+---
+license: cc-by-nc-4.0
+task_categories:
+- text-generation
+language:
+- en
+---
+[ClimbLab](https://huggingface.co/datasets/nvidia/ClimbLab) is a high-quality pre-training corpus released by NVIDIA. Here is the description:
 
 >ClimbLab is a filtered 1.2-trillion-token corpus with 20 clusters.
 Based on Nemotron-CC and SmolLM-Corpus, we employed our proposed CLIMB-clustering to semantically reorganize and filter this combined dataset into 20 distinct clusters, leading to a 1.2-trillion-token high-quality corpus. Specifically, we first grouped the data into 1,000 groups based on topic information. Then we applied two classifiers: one to detect advertisements and another to assess the educational value of the text. Each group was scored accordingly, and low-quality data with low scores was removed.
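The filtering step quoted in the README (score each topic group with an ad-detection classifier and an educational-value classifier, then drop low-scoring groups) can be sketched as follows. This is a hypothetical illustration, not NVIDIA's actual CLIMB code: the function names, the score combination, the threshold, and the toy stand-in classifiers are all assumptions.

```python
# Hypothetical sketch of group-level quality filtering as described above.
# Documents are pre-grouped by topic; each group is scored with two
# classifiers (ad detector, educational-value scorer); low-scoring
# groups are removed. All names and the threshold are illustrative.

def score_group(docs, ad_classifier, edu_classifier):
    """Average group score: reward educational value, penalize ads."""
    ad = sum(ad_classifier(d) for d in docs) / len(docs)
    edu = sum(edu_classifier(d) for d in docs) / len(docs)
    return edu - ad

def filter_groups(groups, ad_classifier, edu_classifier, threshold=0.0):
    """Keep only groups whose combined score clears the threshold."""
    return {
        gid: docs
        for gid, docs in groups.items()
        if score_group(docs, ad_classifier, edu_classifier) > threshold
    }

# Toy usage with stand-in classifiers returning scores in [0, 1]:
groups = {
    "g1": ["an educational article", "a lecture transcript"],
    "g2": ["buy now!!!", "limited offer"],
}
ad = lambda d: 1.0 if ("buy" in d or "offer" in d) else 0.0
edu = lambda d: 1.0 if ("educational" in d or "lecture" in d) else 0.0
kept = filter_groups(groups, ad, edu)
```

In this toy run only the first group survives; the real pipeline would use trained classifiers and a tuned threshold rather than keyword checks.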