Tasks: Text Generation
Sub-tasks: language-modeling
Languages: German
Size: 10M - 100M
# Datasheet: German Commons

This is a datasheet compliant with the recommendations of [Gebru et al. (2018)](https://arxiv.org/abs/1803.09010v8), describing the properties of the **German Commons** dataset.

## Motivation

### Why was the dataset created?

German Commons addresses the critical gap in large-scale open German
text for language model training. Existing German corpora either lack
explicit licensing, contain web-scraped content of uncertain provenance,
or provide insufficient scale.
### Has the dataset been used already?

This represents the initial release of German Commons. No external usage
has occurred prior to publication. Constituent datasets may have been
used independently prior to this release.
### What (other) tasks could the dataset be used for?

Beyond language model pretraining, German Commons supports all German
NLP research requiring clean, license-compliant text, multilingual model
development, or linguistic analysis of German text across domains. The
diverse domain coverage (legal, cultural, scientific, etc.) further
enables domain-specific model development and cross-domain evaluation
studies.

### Who funded the creation of the dataset?

Dataset compilation was supported by German and European research
grants: the German Federal Ministry of Research, Technology, and Space
(BMFTR) under Grants `01IS24077A`, `01IS24077B`, and `01IS24077D`; the
ScaDS.AI Center for Scalable Data Analytics and Artificial Intelligence,
funded by the BMFTR and by the Sächsische Staatsministerium für
Wissenschaft, Kultur und Tourismus under Grant `ScaDS.AI`; and the
OpenWeb-Search.eu project, funded by the European Union under Grant
`GA 101070014`. Constituent datasets originate primarily from
state-funded institutions across Germany and Austria.

## Dataset Composition

### What are the instances?

Each instance represents a single German-language document with
associated metadata and licensing information.
### How many instances are there in total?

The dataset contains 35,778,211 documents comprising 154,558,196,961
GPT-2 tokens.
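Per-document token counts of this kind can be reproduced with the GPT-2
tokenizer; the snippet below is a minimal sketch using the Hugging Face
`transformers` tokenizer, not the dataset's own counting script.

```python
# Minimal sketch: counting GPT-2 tokens for one document. Illustrates
# how such counts are typically computed; not the dataset's own script.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
text = "Ein kurzes deutsches Beispieldokument."
n_tokens = len(tokenizer(text).input_ids)
print(n_tokens)
```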
### What data does each instance consist of?

Each instance includes: a unique identifier for source
cross-referencing, the source dataset name, quality-filtered and
paragraph-deduplicated raw text, a canonical SPDX license URL, a
thematic domain key, a GPT-2 token count, a perplexity score calculated
using a KenLM model trained on German Wikipedia text, and an OCR quality
score calculated using [OCRoscope](https://github.com/Pleias/OCRoscope).
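To see these fields in practice, the dataset can be streamed with the
`datasets` library. The snippet below is a sketch; the repository id
matches the distribution URL given later in this datasheet, and the
field names mentioned in the comments are assumptions rather than a
confirmed schema.

```python
# Sketch: streaming a few records to inspect per-instance fields.
# The exact column names (identifier, source, text, license, domain,
# token count, perplexity, OCR quality) are assumptions about the schema.
from datasets import load_dataset

ds = load_dataset("coral-nlp/german-commons", split="train", streaming=True)
for record in ds.take(3):
    print(sorted(record.keys()))   # list the actual column names
    print(record.get("license"))   # e.g., the canonical SPDX license URL
```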
### Is there a label or target associated with each instance?

No supervised labels exist. However, each instance contains metadata
labels for thematic domain classification, licensing information, and
document length statistics.

### Is any information missing from individual instances?

Paragraph-level deduplication may alter texts from their original form.
Personally identifiable information has been systematically removed.

### Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?

The dataset represents a filtered subset of source collections.
Filtering removes OCR errors, extraction artifacts, and low-quality or
duplicated content, creating a curated selection.

### Are there recommended data splits?

No predefined splits are provided. All data is intended for pretraining.

### Are there any errors, sources of noise, or redundancies in the dataset?

Despite quality filtering and deduplication, residual issues may remain:
cross-corpus text duplicates from overlapping sources, and extraction
artifacts from OCR and PDF-to-text processing.
### Is the dataset self-contained, or does it link to or otherwise rely on external resources?

The dataset is self-contained and centrally downloadable. The source
dataset references provided enable reproducible reconstruction.
## Collection Process

### What mechanisms or procedures were used to collect the data?

Data collection employed multiple automated procedures: direct download
from institutional repositories and open platforms, programmatic
crawling via APIs where available, and automated text extraction from
PDF and other document formats using specialized libraries. The
open-source processing pipeline was then applied for quality filtering
and deduplication across all sources. Validation occurred through manual
inspection of sample outputs, cross-verification against source
repositories, and automated consistency checks.
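As an illustration of the extraction step, the sketch below pulls raw
text from a PDF with the `pypdf` library; the actual pipeline's
extraction libraries are not specified in this datasheet, so treat this
as a stand-in.

```python
# Illustrative PDF-to-text extraction with pypdf; a stand-in for the
# unspecified "specialized libraries" used by the actual pipeline.
from pypdf import PdfReader

reader = PdfReader("document.pdf")  # hypothetical input file
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text[:500])
```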
### How was the data associated with each instance acquired?

All text data represents directly observable content from original
sources; no inference or derivation occurred. Metadata (licensing,
thematic classification, source attribution) was extracted directly from
source repository information or explicitly provided by institutional
datasets. Where PDF extraction was required, raw text underwent
validation against source documents to verify accuracy.
### If the dataset is a sample from a larger set, what was the sampling strategy?

Sampling was deterministic based on explicit criteria: German-language
content as per automated classification, explicit open licensing,
quality thresholds, and institutional source verification. No
probabilistic sampling occurred; all content meeting inclusion criteria
was retained after deduplication.
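A deterministic filter of this kind reduces to a boolean predicate per
document. The sketch below shows the shape of such a rule; the field
names and the quality threshold are chosen for illustration only and are
not the published criteria.

```python
# Sketch of a deterministic inclusion predicate. Field names and the
# threshold are illustrative assumptions, not the published criteria.
def include(record: dict, min_ocr_quality: float = 0.5) -> bool:
    return (
        record.get("language") == "de"          # automated language classification
        and record.get("license") is not None   # explicit open license present
        and record.get("ocr_quality", 0.0) >= min_ocr_quality  # quality threshold
        and record.get("source") is not None    # verified institutional source
    )
```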
### Who was involved in the data collection process and how were they compensated?

Data collection was conducted by the author team using automated
systems. No crowdworkers, contractors, or external annotators were
employed. All processing occurred through programmatic methods without
manual content creation or labeling requiring compensation.

### Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances?

Collection occurred between January and August 2025, using source
dataset versions available through August 31st, 2025. The underlying
content creation spans multiple centuries, representing a temporal range
that significantly predates and extends beyond the collection timeframe.
## Data Preprocessing

### Was any preprocessing/cleaning/labeling of the data done?

Comprehensive preprocessing included: text extraction from PDFs and OCR
sources with encoding normalization, language detection and filtering
for German content, quality filtering targeting digitization artifacts
and extraction errors, paragraph-level deduplication using content
hashing, systematic PII removal, and format standardization across all
source types. Thematic domain classification was applied based on the
source dataset.
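The paragraph-level deduplication step can be pictured as hashing
normalized paragraphs and keeping only first occurrences. The sketch
below assumes paragraphs are delimited by blank lines with simple
whitespace and case normalization, which may differ from the pipeline's
actual segmentation.

```python
# Sketch of paragraph-level deduplication via content hashing. Assumes
# blank-line paragraph boundaries and simple whitespace/case
# normalization; the llmdata pipeline may segment and normalize
# differently.
import hashlib

def dedup_paragraphs(text: str, seen: set) -> str:
    kept = []
    for para in text.split("\n\n"):
        norm = " ".join(para.split()).lower()
        digest = hashlib.sha256(norm.encode("utf-8")).hexdigest()
        if norm and digest not in seen:
            seen.add(digest)
            kept.append(para)
    return "\n\n".join(kept)

seen_hashes = set()
doc = "Erster Absatz.\n\nErster Absatz.\n\nZweiter Absatz."
print(dedup_paragraphs(doc, seen_hashes))  # keeps each paragraph once
```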
### Was the raw data saved in addition to the preprocessed/cleaned/labeled data?

Raw data is not provided since all constituent source datasets remain
publicly accessible through their original repositories.

### Is the software used to preprocess/clean/label the instances available?

All preprocessing software is open source and available at
<https://github.com/coral-nlp/llmdata>, ensuring complete
reproducibility of the dataset.
### Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?

Yes. The procedure successfully addresses the identified gap by
providing the largest collection to date of openly licensed German text,
enabling open German language model development without licensing
uncertainties, and establishing a reproducible methodology for future
dataset construction. This directly fulfills the stated motivation of
creating license-compliant, large-scale German training data.
### How will the dataset be distributed?

The dataset is distributed as Parquet files through multiple public
repositories for redundancy. Primary distribution occurs via Hugging
Face Hub at <https://huggingface.co/datasets/coral-nlp/german-commons>.
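Because the distribution format is plain Parquet, individual shards can
also be read without the `datasets` library. The sketch below uses
`pandas` with the `hf://` filesystem provided by `huggingface_hub`; the
shard filename is a hypothetical placeholder, not an actual path in the
repository.

```python
# Sketch: reading one Parquet shard directly. Requires pandas plus
# huggingface_hub for the hf:// filesystem; the shard filename is a
# hypothetical placeholder.
import pandas as pd

df = pd.read_parquet(
    "hf://datasets/coral-nlp/german-commons/data/part-00000.parquet"
)
print(df.columns.tolist())
```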
### When will the dataset be released/first distributed? What license (if any) is it distributed under?

Public release occurred on 2025/10/14. Dataset metadata and compilation
are licensed under ODC-BY 1.0
(<https://opendatacommons.org/licenses/by/1-0/>). Individual document
texts retain their original licenses as specified in each instance's
SPDX URL field, creating a heterogeneous but fully documented licensing
structure.
### Are there any copyrights on the data?

Yes. Each document retains copyright under its original creator or
institutional provider, governed by the specific license indicated in
the instance metadata. The compilation itself does not claim additional
copyright over constituent texts.

### Are there any fees or access/export restrictions?

The dataset is freely accessible without fees or registration
requirements. However, users must comply with individual document
licenses, which may include attribution requirements or share-alike
provisions. Commercial use is permitted by all constituent licenses.
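For use cases that must avoid, say, share-alike licenses, documents can
be selected by their per-instance license field. In the sketch below,
both the field name and the SPDX URLs in the allow-list are illustrative
assumptions, not the dataset's confirmed values.

```python
# Sketch: restricting a streamed dataset to an allow-list of licenses.
# The "license" field name and the SPDX URLs are illustrative
# assumptions, not the dataset's confirmed values.
from datasets import load_dataset

ALLOWED = {
    "https://spdx.org/licenses/CC-BY-4.0.html",
    "https://spdx.org/licenses/CC0-1.0.html",
}

ds = load_dataset("coral-nlp/german-commons", split="train", streaming=True)
permissive = ds.filter(lambda record: record.get("license") in ALLOWED)
```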
## Dataset Maintenance

### Who is supporting/hosting/maintaining the dataset?

The dataset is maintained by the authors of this report.

### Will the dataset be updated? If so, how often and by whom?

Updates may occur when significant new German open-source collections
become available. The original authors will coordinate updates, with
community contributions welcomed through the open-source pipeline.
### How will updates be communicated?

Updates will be announced through versioned releases on hosting
platforms with detailed changelogs, and through academic publication
updates when substantial changes occur.
### If the dataset becomes obsolete how will this be communicated?

Obsolescence will be communicated through deprecation notices on all
hosting platforms.

### Is there a repository to link to any/all papers/systems that use this dataset?

No centralized usage repository will be maintained. Usage tracking
occurs through standard academic citation of the dataset paper. Users
are encouraged to cite the dataset publication when reporting results or
building derivative works.

### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?

The open-source `llmdata` pipeline enables community extensions through
standardized data ingestion protocols for new sources and automated
quality assessment and deduplication using established filtering
criteria. Community contributions undergo review by the maintenance
team.

## Ethical Considerations

### Were any ethical review processes conducted?

No formal institutional review board process was conducted. The dataset
relies exclusively on pre-existing, publicly available, and explicitly
licensed materials from established institutional sources. Data
processing incorporated ethical considerations including systematic PII
removal and exclusion of sources lacking clear licensing frameworks.

### Does the dataset contain data that might be considered confidential?

No. All included content derives from explicitly open-licensed
institutional sources.

### Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?

Potentially yes. The dataset spans centuries of German text documents,
which may include historical perspectives, political viewpoints, or
language that could be considered offensive by contemporary standards.
The scale and temporal range make comprehensive content moderation
infeasible. Users should exercise appropriate caution.

### Does the dataset relate to people?

The dataset may contain publicly available information relating to
individuals in various contexts including historical documents,
biographical information, academic citations, and government records.