
HPLT

This is a large-scale collection of web-crawled documents in 198 world languages, produced by the HPLT project. The sources of the data are the Internet Archive and Common Crawl. For a detailed description of this and previous releases by HPLT, please refer to our website.

HPLT release v3.0

In July 2025, the European HPLT initiative completed a new release of its monolingual datasets, offering better data quality, more annotations and metadata, and greatly increased volume. HPLT Monolingual Datasets 3.0 comprise some 50 terabytes of compressed data, covering 198 languages. More than half of the data represents the English language. Not counting the English majority portion, the dataset offers about 11.5 billion documents, 40 trillion Unicode characters, or 13.5 trillion tokens (using the Gemma 3 vocabulary). Overall, HPLT 3.0 is about three times larger than the previous release and likely constitutes the largest generally available multilingual dataset.

The dataset has been derived from some 7.2 petabytes of raw web crawls from the Internet Archive and the Common Crawl, spanning the period between 2012 and 2024. Text extraction from HTML documents was performed through the Trafilatura library, language identification with OpenLID 2.0, and deduplication, annotation, and filtering through the Monotextor pipeline.

Beyond quality and size, other distinguishing properties of the HPLT Monolingual Datasets are the sorting by a language-independent estimate of document quality and the rich annotations and metadata, including web register labels (for 104 of the languages in release 3.0), document- and segment-level language identification, annotation of personally identifiable information, and provenance information from the original crawl. Release 3.0 also fixes a deficiency in the Chinese data in the previous release, where double-width punctuation had been over-zealously normalized.

Except for Chinese, English, and Russian, each language-specific portion has been globally deduplicated.

Data processing was performed on dedicated storage and compute resources at the Czech and Norwegian national HPC infrastructures CESNET and Sigma2 NRIS, as well as on the EuroHPC LUMI system. The HPLT download site is hosted at the Sigma2 NIRD datalake.

Downloading HPLT v3.0

For each language, the data is organized in smaller shards, sorted by document quality estimates (WDS). For Russian (in Cyrillic script), for example, the file rus_Cyrl/10_1.jsonl.zst is the first (and only) shard in the top WDS bin (scored as exactly 10), and rus_Cyrl/9_1.jsonl.zst through rus_Cyrl/9_103.jsonl.zst are the 103 shards in the bin for scores greater than or equal to WDS 9 and less than 10.
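Each shard is a zstandard-compressed JSON Lines file, so individual documents can be inspected with standard command-line tools once a shard has been downloaded. The following is a minimal sketch, assuming the zstd and jq utilities are installed; the record fields themselves correspond to the annotations and metadata described above:

zstdcat rus_Cyrl/10_1.jsonl.zst | head -n 1 | jq '.'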

The easiest way to download the data for a specific language is to use a command like wget -i with a language-specific mapping file containing full download addresses for all shards of this particular language, for example (for Crimean Tatar in Latin script):

wget -O - https://data.hplt-project.org/three/sorted/crh_Latn.map | wget -x -nH --cut-dirs=2 -i -

The above command retrieves the map for crh_Latn and feeds it as a list of download addresses into a second wget invocation, requesting the creation of local directories (-x), but cutting off the host and first two directory components (-nH --cut-dirs=2).
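If only the highest-quality portion of a language is needed, the map can be filtered before it is passed on to wget. The command below is a sketch that assumes the download addresses in the map end in the bin_shard.jsonl.zst pattern described above; it keeps only the shards in the WDS 10 and 9 bins:

wget -O - https://data.hplt-project.org/three/sorted/crh_Latn.map | grep -E '/(10|9)_[0-9]+\.jsonl\.zst$' | wget -x -nH --cut-dirs=2 -i -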

To download all available data, there is a larger mapping file for the full multilingual (excluding English) portion, amounting to a download of around 20 terabytes. The complete English data comprises some 30 terabytes and can be downloaded using its per-language mapping file. These can be retrieved using e.g. wget, and used as input directives for larger downloads, much like in the example above:

wget https://data.hplt-project.org/three/sorted/multilingual.map

wget https://data.hplt-project.org/three/sorted/eng_Latn.map
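Following the same pattern as the single-language example above, a retrieved map file can then be fed to wget as a list of download addresses, for example for the full multilingual portion:

wget -x -nH --cut-dirs=2 -i multilingual.map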

To speed up large downloads, it can be beneficial to use multiple parallel connections, for example using the --max-threads option in wget2. We recommend limiting download parallelization to 16–32 threads to avoid server-side rate limiting, which should allow download rates of around 250 gigabytes per hour.
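For example (a minimal sketch, assuming GNU wget2 is installed and the eng_Latn.map file has already been retrieved as shown above; feeding the map through xargs -P into plain wget is an alternative):

wget2 --max-threads=16 -x -nH --cut-dirs=2 -i eng_Latn.map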

New in this release compared to HPLT v2

  • Reflects substantially more raw web data, primarily from the Common Crawl
  • Additional metadata, including more information from the underlying crawl
  • Upgrade to Trafilatura 2.0 with empirical fine-tuning of extraction parameters
  • Plain-text and structured document representation, in simple, normalized XML
  • Better language identification; refined codes for Arabic and Chinese
  • Global deduplication for most languages; MinHash cluster size as metadata
  • Annotation with Turku web register labels for more than half the languages
  • Upgrade to newer, improved Web Docs Scorer (WDS) document quality estimates
  • Global sorting within each language by WDS and sharding into WDS bins (10–5)
  • Improved filtering for robots.txt opt-out, adult content, and credentials
  • Improved deduplication pipeline (global deduplication for most languages)

Statistics & validation

Summary statistics per language are available for download as a structured manifest.json, also including download links for the individual data files, per-language maps, and sample documents from various quality bins. Additionally, each language subdirectory provides compressed lists of unique domains, full URLs, and so-called normalized document signatures, together with their frequencies of occurrence, for example nob_Latn/.domains.zst, nob_Latn/.urls.zst, and nob_Latn/.signatures.zst for Norwegian Bokmål.
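A quick look at these lists only requires standard command-line tools. The following is a sketch assuming the zstd utilities are installed and using the Norwegian Bokmål file names quoted above; it prints the first entries of the domain list:

zstdcat nob_Latn/.domains.zst | head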

The counts of documents per language or total storage sizes in the above statistics can be used to approximately validate each language subdirectory, but for more thorough validation of individual data files or full downloads, MD5 checksum files are provided with naming conventions parallel to the data and per-language map files. For example: nob_Latn/.10_1.jsonl.md5 for the first data file in Norwegian Bokmål, and nob_Latn.md5 for its full set of data files.
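For example, after downloading all shards of Norwegian Bokmål together with its per-language checksum file, a full verification could look like this (a sketch; it assumes the paths listed in the checksum file are relative to the directory from which md5sum is run):

md5sum -c nob_Latn.md5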

License and takedown

License

These data are released under this licensing scheme:

Notice and take down policy

Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

  • Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
  • Clearly identify the copyrighted work claimed to be infringed.
  • Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
  • You can reach us at [email protected]

Take down: We will comply with legitimate requests by removing the affected sources from the next release of the corpora.

  • It is your responsibility that any use of the data complies with any applicable legal framework, such as, among others, the EU Copyright Directive 2019/790 and the General Data Protection Regulation 2018, as amended.

Funding

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government’s Horizon Europe funding guarantee (grant number 10052546).
