AI & ML interests

OpenDataLab provides high-quality open datasets and tools for large models. It is the designated open-source data service platform of the China Large Model Corpus Data Alliance.



📚 In 2025, we open-sourced WanJuan 3.0 (WanJuan Silu), a high-quality multilingual dataset.

🧾 January 2025: First release of the multilingual pre-training corpus. Primarily text data, collected from publicly available web content, literature, patents, and more across 5 countries/regions. The total size exceeds 1.2TB, with 300 billion tokens, an internationally leading scale. The initial release includes Thai, Russian, Arabic, Korean, and Vietnamese sub-corpora, each exceeding 150GB. Using the "InternLM" intelligent tagging system, the research team categorized each sub-corpus into 7 major classes (e.g., history, politics, culture, real estate, shopping, weather, dining, encyclopedias, professional knowledge) and 32 sub-classes, ensuring localized linguistic and cultural relevance and making it easy for researchers to retrieve data for diverse needs.
Download links: Russian, Arabic, Korean, Vietnamese, Thai.


🌏 March 2025: Second release, a multilingual multimodal corpus built on over 1.2TB of native textual corpora from five countries. Each subset spans seven major categories and 34 subcategories covering local characteristics such as history, politics, culture, real estate, shopping, weather, dining, encyclopedic knowledge, and professional expertise. Download links for the five subsets are provided below; everyone is welcome to download and use them.

It comprises 4 data types:

  • Image-Text: over 2 million images (raw size: 362.174GB).
  • Audio-Text: 200 hours of ultra-high-precision annotated audio per language.
  • Video-Text: over 8 million video clips (raw duration: 28,000+ hours, refined to 16,000+ hours of high-quality content).
  • Localized SFT (Supervised Fine-Tuning): 184,000 SFT entries covering local culture, daily conversations, code, mathematics, and science. 23,000 entries per language, including 3,000 culturally unique Q&A pairs designed by local residents and 20,000 translated entries filtered through a quality-check pipeline combining rules and model scoring. In total, the release covers 8 languages across 4 modalities with 11.5 million entries, refined to industrial-grade quality for "ready-to-use" applications.
    Download links: 5 languages (Arabic, Russian, Korean, Vietnamese, Thai); 3 languages (Serbian, Hungarian, Czech).
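The translated SFT entries above are filtered through a pipeline combining rules and model scoring. As an illustration only (the actual rules, scorer, and threshold are not published here; everything below is a hypothetical sketch, not OpenDataLab's pipeline), such a two-stage filter might look like:

```python
# Illustrative sketch of a rules-plus-model-scoring quality filter for
# translated SFT entries. The rules, threshold, and scorer are hypothetical
# stand-ins, NOT OpenDataLab's actual pipeline.

def passes_rules(entry: dict) -> bool:
    """Cheap heuristic checks applied before any model scoring."""
    text = entry.get("response", "")
    if not text.strip():                         # drop empty responses
        return False
    if len(text) < 10:                           # drop implausibly short answers
        return False
    if "http://" in text or "https://" in text:  # drop stray web-noise URLs
        return False
    return True

def filter_entries(entries, score_fn, threshold=0.7):
    """Keep entries that pass the rules AND score above the threshold."""
    return [e for e in entries if passes_rules(e) and score_fn(e) >= threshold]

# Toy stand-in for a model-based quality scorer.
def toy_score(entry):
    return 0.9 if "?" not in entry["response"] else 0.5

entries = [
    {"prompt": "q1", "response": "A clear, complete answer."},
    {"prompt": "q2", "response": ""},                  # fails rules
    {"prompt": "q3", "response": "Maybe? Not sure?"},  # fails model score
]
kept = filter_entries(entries, toy_score)
print(len(kept))  # 1
```

The point of the two stages is cost: cheap rules discard obvious junk so the (expensive) model scorer only sees plausible candidates.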

🔥🔥🔥 OpenDataLab provides an ecosystem of high-quality datasets for the community. It offers:

🌟 Extensive open data resources for AI models

● A fast and simple way to access open datasets
● 7,700+ large-scale, high-quality open datasets for large models
● 1,200+ open datasets for computer vision
● 200+ open datasets from CVPR
● Datasets categorized by hot topics

✨ Open-source data processing toolkits

● Data acquisition toolkits supporting large datasets
● Data acquisition toolkits supporting many kinds of tasks
● An open-source intelligent labeling toolbox

💫 Dataset description language

● Format standardization
● DSDL: Dataset Description Language
● Define a CV dataset with DSDL
● OpenDataLab has standardized 100+ CV datasets
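To give a feel for what a standardized dataset description buys you, here is a minimal Python sketch that validates classification samples against a DSDL-style declaration. The structure and field names below are simplified assumptions for illustration; refer to the DSDL documentation for the real YAML-based specification.

```python
# Minimal sketch of validating samples against a DSDL-style description.
# The description layout below is a simplified assumption, NOT the actual
# DSDL specification.

# A toy "description": an image-classification sample has an image path
# and a label drawn from a fixed class domain.
DESCRIPTION = {
    "struct": "ImageClassificationSample",
    "fields": {"image": "Image", "label": "Label"},
    "class_dom": ["cat", "dog", "bird"],
}

def validate_sample(sample: dict, desc: dict) -> bool:
    """Check that a sample has exactly the declared fields and a valid label."""
    if set(sample) != set(desc["fields"]):
        return False
    if not str(sample["image"]).endswith((".jpg", ".png")):
        return False
    return sample["label"] in desc["class_dom"]

ok = validate_sample({"image": "img_001.jpg", "label": "cat"}, DESCRIPTION)
bad = validate_sample({"image": "img_002.jpg", "label": "horse"}, DESCRIPTION)
print(ok, bad)  # True False
```

Because every standardized dataset declares its fields and label domain up front, the same loader and checks can be reused across all 100+ datasets instead of writing per-dataset parsing code.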

Check out our tutorial videos (in Chinese) to get started.


📣 We have upgraded and launched self-service dataset uploading for authors. We invite you to use it to promote your open-source datasets, AI research results, and more, so that more people can discover, access, and use your data.

For an introduction to the self-service upload function, see the 【help doc】. You can create and share your dataset by following our guidelines. 💪

If you have any questions or run into obstacles, please feel free to contact us at [email protected].