Each record below is a GitHub issue or pull request from the `huggingface/datasets` repository, with the following columns:

| Column | Type | Notes |
|---|---|---|
| url | string | 58-61 chars |
| repository_url | string | 1 distinct value |
| labels_url | string | 72-75 chars |
| comments_url | string | 67-70 chars |
| events_url | string | 65-68 chars |
| html_url | string | 46-51 chars |
| id | int64 | 599M-3.22B |
| node_id | string | 18-32 chars |
| number | int64 | 1-7.68k |
| title | string | 1-290 chars |
| user | dict | |
| labels | list | 0-4 items |
| state | string | 2 distinct values |
| locked | bool | 1 distinct value |
| assignee | dict | |
| assignees | list | 0-4 items |
| milestone | dict | |
| comments | list | 0-30 items |
| created_at | timestamp[ns, tz=UTC] | 2020-04-14 10:18:02 to 2025-07-12 04:48:30 |
| updated_at | timestamp[ns, tz=UTC] | 2020-04-27 16:04:17 to 2025-07-12 17:43:05 |
| closed_at | timestamp[ns, tz=UTC] | 2020-04-14 12:01:40 to 2025-07-11 17:42:01, nullable |
| author_association | string | 4 distinct values |
| type | null | |
| active_lock_reason | null | |
| sub_issues_summary | dict | |
| body | string | 0-228k chars, nullable |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | 67-70 chars |
| performed_via_github_app | null | |
| state_reason | string | 4 distinct values |
| draft | bool | 2 distinct values |
| pull_request | dict | |
| is_pull_request | bool | 2 distinct values |
## Issue #7680: Question about iterable dataset and streaming

- URL: https://github.com/huggingface/datasets/issues/7680 (API: https://api.github.com/repos/huggingface/datasets/issues/7680)
- id: 3224824151 · node_id: `I_kwDODunzps7ANulX` · author: [Tavish9](https://github.com/Tavish9) (NONE)
- State: open · created: 2025-07-12T04:48:30 · updated: 2025-07-12T04:48:30
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

In the docs I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78

I am confused:

1. If we have already loaded the dataset, why call `to_iterable_dataset()`? Does it iterate over the dataset faster than the map-style dataset does?
2. `load_dataset(streaming=True)` is useful for huge datasets, but it is slow. How can its speed be made comparable to `to_iterable_dataset()` without loading the whole dataset into RAM?

Comments: none
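A minimal sketch contrasting the two approaches from this question, assuming a Parquet-backed dataset on the Hub (the repo id is a placeholder). `load_dataset(streaming=True)` reads over the network, while `to_iterable_dataset()` iterates over already-downloaded, memory-mapped Arrow data, which is why it is typically faster without holding everything in RAM:

```python
from datasets import load_dataset

# Stream straight from the Hub: nothing is downloaded up front, but each
# example is fetched over the network, so throughput is bandwidth-bound.
streamed = load_dataset("rajpurkar/squad", split="train", streaming=True)

# Download once, then iterate locally: the Arrow data is memory-mapped rather
# than loaded into RAM, and sharding lets several DataLoader workers read in parallel.
local = load_dataset("rajpurkar/squad", split="train")
iterable = local.to_iterable_dataset(num_shards=64)

for example in iterable.take(3):
    print(example["id"])
```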
## Issue #7679: metric glue breaks with 4.0.0

- URL: https://github.com/huggingface/datasets/issues/7679 (API: https://api.github.com/repos/huggingface/datasets/issues/7679)
- id: 3220787371 · node_id: `I_kwDODunzps6_-VCr` · author: [stas00](https://github.com/stas00) (CONTRIBUTOR)
- State: closed (completed) · created: 2025-07-10T21:39:50 · closed: 2025-07-11T17:42:01 by stas00
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

### Describe the bug

This worked fine with 3.6.0; with 4.0.0, `eval_metric = metric.compute()` in HF Accelerate breaks. The code that fails is https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84:

```python
def simple_accuracy(preds, labels):
    print(preds, labels)
    print(f"{preds==labels}")
    return float((preds == labels).mean())
```

data:

```
Column([1, 0, 0, 1, 1]) Column([1, 0, 0, 1, 0])
False
```

```
[rank0]:     return float((preds == labels).mean())
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'bool' object has no attribute 'mean'
```

Some behavior changed in this new major release of `datasets`, and it requires updating HF Accelerate and perhaps the glue metric code, all of which belong to HF.

### Environment info

datasets=4.0.0

Comments (2):

1. I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:

   ```
   pip install -U evaluate
   ```

2. Thanks so much, @lhoestq!
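A minimal sketch of the kind of fix that shipped in `evaluate` 0.4.5, assuming `preds` and `labels` are list-like. In `datasets` 4.0 they arrive as `Column` objects, and `==` between two plain containers is object comparison that returns a single `bool`, hence the `'bool' object has no attribute 'mean'` error above:

```python
import numpy as np

def simple_accuracy(preds, labels):
    # Coerce Column/list inputs to numpy arrays so that == is elementwise
    # again, instead of a single bool comparing the two containers.
    preds = np.asarray(preds)
    labels = np.asarray(labels)
    return float((preds == labels).mean())
```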
## Issue #7678: To support decoding audio data, please install 'torchcodec'.

- URL: https://github.com/huggingface/datasets/issues/7678 (API: https://api.github.com/repos/huggingface/datasets/issues/7678)
- id: 3218625544 · node_id: `I_kwDODunzps6_2FQI` · author: [alpcansoydas](https://github.com/alpcansoydas) (NONE)
- State: closed (completed) · created: 2025-07-10T09:43:13 · closed: 2025-07-11T05:05:42 by alpcansoydas
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook. It works in version 3.6.0.

```python
!pip install -q -U datasets huggingface_hub fsspec
from datasets import load_dataset

downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_dataset["audio"][0])
```

```
ImportError                               Traceback (most recent call last)
/tmp/ipython-input-4-90623240.py in <cell line: 0>()
----> 1 downloaded_dataset["audio"][0]

10 frames
/usr/local/lib/python3.11/dist-packages/datasets/features/audio.py in decode_example(self, value, token_per_repo_id)
    170             from ._torchcodec import AudioDecoder
    171         else:
--> 172             raise ImportError("To support decoding audio data, please install 'torchcodec'.")
    173 
    174         if not self.decode:

ImportError: To support decoding audio data, please install 'torchcodec'.
```

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0

Comments (1):

1. Hi! Yes, you should `!pip install -U datasets[audio]` to get the required dependencies.

   `datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` `AudioDecoder` enables streaming from HF and also allows decoding ranges of audio.
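A minimal sketch of the suggested fix. In `datasets` 4.0 a decoded audio example is a `torchcodec` `AudioDecoder` rather than a dict of array and sampling rate; the decoding calls below follow current `torchcodec` usage, but treat the exact method names as assumptions:

```python
# In a notebook: !pip install -q -U "datasets[audio]"   (installs torchcodec)
from datasets import load_dataset

ds = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
decoder = ds[0]["audio"]             # AudioDecoder object in datasets 4.0
samples = decoder.get_all_samples()  # decode the whole clip
print(samples.data.shape, samples.sample_rate)
```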
## Issue #7677: Toxicity fails with datasets 4.0.0

- URL: https://github.com/huggingface/datasets/issues/7677 (API: https://api.github.com/repos/huggingface/datasets/issues/7677)
- id: 3218044656 · node_id: `I_kwDODunzps6_z3bw` · author: [serena-ruan](https://github.com/serena-ruan) (NONE)
- State: closed (completed) · created: 2025-07-10T06:15:22 · closed: 2025-07-11T04:40:59 by serena-ruan
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

### Describe the bug

With the latest 4.0.0 release, the Hugging Face toxicity evaluation module fails with:

```
ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```

### Steps to reproduce the bug

```
>>> toxicity.compute(predictions=["This is a response"])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/evaluate/module.py", line 467, in compute
    output = self._compute(**inputs, **compute_kwargs)
  File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 135, in _compute
    scores = toxicity(predictions, self.toxic_classifier, toxic_label)
  File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 103, in toxicity
    for pred_toxic in toxic_classifier(preds):
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 159, in __call__
    result = super().__call__(*inputs, **kwargs)
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1431, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1437, in run_single
    model_inputs = self.preprocess(inputs, **preprocess_params)
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 183, in preprocess
    return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2867, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2927, in _call_one
    raise ValueError(
ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```

### Expected behavior

This worked before the 4.0.0 release.

### Environment info

- `datasets` version: 4.0.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.10.16
- `huggingface_hub` version: 0.33.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0

Comments (2):

1. Hi! You can fix this by upgrading `evaluate`:

   ```
   pip install -U evaluate
   ```

2. Thanks, verified evaluate 0.4.5 works!
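A quick way to verify the resolution once `evaluate` is upgraded as suggested in the comments (0.4.5 or later alongside `datasets` 4.0); `toxicity` is loaded as a measurement module, matching the module path in the traceback above:

```python
# pip install -U evaluate
import evaluate

toxicity = evaluate.load("toxicity", module_type="measurement")
result = toxicity.compute(predictions=["This is a response"])
print(result["toxicity"])  # one score per prediction
```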
## Issue #7676: Many things broken since the new 4.0.0 release

- URL: https://github.com/huggingface/datasets/issues/7676 (API: https://api.github.com/repos/huggingface/datasets/issues/7676)
- id: 3216857559 · node_id: `I_kwDODunzps6_vVnX` · author: [mobicham](https://github.com/mobicham) (NONE)
- State: open · created: 2025-07-09T18:59:50 · updated: 2025-07-11T19:57:07
- Labels: none · assignees: none · milestone: none · reactions: +1 × 9

Body:

### Describe the bug

The new changes in 4.0.0 are breaking many datasets, including those used by lm-evaluation-harness. I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting:

```
File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in generate_from_dict(obj)
   1471 class_type = _FEATURE_TYPES.get(_type, None) or globals().get(_type, None)
   1473 if class_type is None:
-> 1474     raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
   1476 if class_type == LargeList:
   1477     feature = obj.pop("feature")

ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```

### Steps to reproduce the bug

```python
import lm_eval

model_eval = lm_eval.models.huggingface.HFLM(pretrained=model, tokenizer=tokenizer)
lm_eval.evaluator.simple_evaluate(model_eval, tasks=["winogrande"], num_fewshot=5, batch_size=1)
```

### Expected behavior

Older `datasets` versions should work just fine, as before.

### Environment info

- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0

Comments (8):

1. Happy to take a look, do you have a list of impacted datasets?

2. Thanks @lhoestq. Related to lm-eval: at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like [bbh](https://huggingface.co/datasets/lukaemon/bbh), most probably others too.

3. Hi @mobicham,

   I was having the same issue, `ValueError: Feature type 'List' not found`, yesterday when I tried to load my dataset using the `load_dataset()` function. By updating to `4.0.0`, I don't see this error anymore.

   P.S. I used `Sequence` in place of list when building my dataset (see below):

   ```python
   features = Features({
       ...
       "objects": Sequence({
           "id": Value("int64"),
           "bbox": Sequence(Value("float32"), length=4),
           "category": Value("string")
       }),
       ...
   })
   dataset = Dataset.from_dict(data_dict)
   dataset = dataset.cast(features)
   ```

4. The issue comes from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train), [allenai/winogrande](https://huggingface.co/datasets/allenai/winogrande), [lukaemon/bbh](https://huggingface.co/datasets/lukaemon/bbh) and [Rowan/hellaswag](https://huggingface.co/datasets/Rowan/hellaswag), which are all unsupported in `datasets` 4.0 since they are based on python scripts. Fortunately there are PRs to fix those datasets (I did some of them a year ago but dataset authors haven't merged yet... will have to ping people again about it and update here):

   - https://huggingface.co/datasets/hails/mmlu_no_train/discussions/2 ⚙️
   - https://huggingface.co/datasets/allenai/winogrande/discussions/6 merged! ✅
   - https://huggingface.co/datasets/Rowan/hellaswag/discussions/7 merged! ✅
   - https://huggingface.co/datasets/lukaemon/bbh/discussions/2 merged! ✅

5. Thank you very much @lhoestq, I will try next week 👍

6. I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both the dataset saving code and the loading code are <4.0.0 or >=4.0.0.

7. This broke several lm-eval-harness workflows for me, and reverting to older versions of datasets is not fixing the issue. Does anyone have a workaround?

8. > I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both the dataset saving code and the loading code are <4.0.0 or >=4.0.0.

   `datasets` 4.0 can load datasets saved using any older version. But the other way around is not always true: if you save a dataset with `datasets` 4.0 it may use the new `List` type that requires 4.0 and raise `ValueError: Feature type 'List' not found.`

   However, issues with lm eval harness seem to come from another issue: unsupported dataset scripts (see https://github.com/huggingface/datasets/issues/7676#issuecomment-3057550659).

   > This broke several lm-eval-harness workflows for me, and reverting to older versions of datasets is not fixing the issue. Does anyone have a workaround?

   When reverting to an old `datasets` version I'd encourage you to clear your cache (by default it is located at `~/.cache/huggingface/datasets`), otherwise it might try to load a `List` type that didn't exist in old versions.
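A minimal sketch of the maintainer's workaround from the last comment, for when you revert to `datasets` < 4.0: clear the local cache so Arrow files written with the new `List` feature type are not picked up by the older version (the path below is the default cache location):

```python
import shutil
from pathlib import Path

# Default datasets cache; adjust if HF_DATASETS_CACHE is set in your environment.
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"
shutil.rmtree(cache_dir, ignore_errors=True)  # forces a fresh download/rebuild
```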
## Issue #7675: common_voice_11_0.py failure in dataset library

- URL: https://github.com/huggingface/datasets/issues/7675 (API: https://api.github.com/repos/huggingface/datasets/issues/7675)
- id: 3216699094 · node_id: `I_kwDODunzps6_uu7W` · author: [egegurel](https://github.com/egegurel) (NONE)
- State: open · created: 2025-07-09T17:47:59 · updated: 2025-07-10T14:49:43
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

### Describe the bug

I tried to download the dataset but got this error:

```python
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
```

```
RuntimeError                              Traceback (most recent call last)
Cell In[8], line 4
      1 from datasets import load_dataset
----> 4 load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)

File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
   1387 verification_mode = VerificationMode(
   1388     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   1389 )
   1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
   1393     path=path,
   1394     name=name,
   1395     data_dir=data_dir,
   1396     data_files=data_files,
   1397     cache_dir=cache_dir,
   1398     features=features,
   1399     download_config=download_config,
   1400     download_mode=download_mode,
   1401     revision=revision,
   1402     token=token,
   1403     storage_options=storage_options,
   1404     **config_kwargs,
   1405 )
   1407 # Return iterable dataset in case of streaming
   1408 if streaming:

File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
   1130 if features is not None:
   1131     features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
   1133     path,
   1134     revision=revision,
   1135     download_config=download_config,
   1136     download_mode=download_mode,
   1137     data_dir=data_dir,
   1138     data_files=data_files,
   1139     cache_dir=cache_dir,
   1140 )
   1141 # Get dataset builder class
   1142 builder_kwargs = dataset_module.builder_kwargs

File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
   1026             if isinstance(e1, FileNotFoundError):
   1027                 raise FileNotFoundError(
   1028                     f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
   1029                     f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1030                 ) from None
-> 1031             raise e1 from None
   1032 else:
   1033     raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")

File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
    981 try:
    982     api.hf_hub_download(
    983         repo_id=path,
    984         filename=filename,
    (...)
    987         proxies=download_config.proxies,
    988     )
--> 989     raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
    990 except EntryNotFoundError:
    991     # Use the infos from the parquet export except in some cases:
    992     if data_dir or data_files or (revision and revision != "main"):

RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py
```

### Steps to reproduce the bug

```python
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
```

### Expected behavior

It's supposed to download this dataset.

### Environment info

Python 3.12, Windows 11

Comments (1):

1. Hi! This dataset is not in a supported format, and `datasets` 4 doesn't support datasets that are based on python scripts, which are often a source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format, e.g. parquet, at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussions.

   In the meantime you can pin an old version of `datasets`, like `datasets==3.6.0`.
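A minimal sketch of the interim workaround from the comment above, assuming pinning is acceptable in your environment; note that on the 3.x line, loading script-based datasets also requires `trust_remote_code=True`:

```python
# pip install "datasets==3.6.0"   # last major line that still runs dataset scripts
from datasets import load_dataset

ds = load_dataset(
    "mozilla-foundation/common_voice_11_0", "en",
    split="test", streaming=True,
    trust_remote_code=True,  # needed for script-based datasets on 3.x
)
```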
## PR #7674: set dev version

- URL: https://github.com/huggingface/datasets/pull/7674 (API: https://api.github.com/repos/huggingface/datasets/issues/7674)
- id: 3216251069 · node_id: `PR_kwDODunzps6eJGo5` · author: [lhoestq](https://github.com/lhoestq) (MEMBER)
- State: closed · created: 2025-07-09T15:01:25 · merged: 2025-07-09T15:01:33Z · draft: false
- Diff: https://github.com/huggingface/datasets/pull/7674.diff · patch: https://github.com/huggingface/datasets/pull/7674.patch
- Labels: none · assignees: none · milestone: none · reactions: none

Body: none

Comments (1):

1. The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7674). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
## PR #7673: Release: 4.0.0

- URL: https://github.com/huggingface/datasets/pull/7673 (API: https://api.github.com/repos/huggingface/datasets/issues/7673)
- id: 3216075633 · node_id: `PR_kwDODunzps6eIgj-` · author: [lhoestq](https://github.com/lhoestq) (MEMBER)
- State: closed · created: 2025-07-09T14:03:16 · merged: 2025-07-09T14:36:18Z · draft: false
- Diff: https://github.com/huggingface/datasets/pull/7673.diff · patch: https://github.com/huggingface/datasets/pull/7673.patch
- Labels: none · assignees: none · milestone: none · reactions: none

Body: none

Comments (1):

1. The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
## PR #7672: Fix double sequence

- URL: https://github.com/huggingface/datasets/pull/7672 (API: https://api.github.com/repos/huggingface/datasets/issues/7672)
- id: 3215287164 · node_id: `PR_kwDODunzps6eF1vj` · author: [lhoestq](https://github.com/lhoestq) (MEMBER)
- State: closed · created: 2025-07-09T09:53:39 · merged: 2025-07-09T09:56:27Z · draft: false
- Diff: https://github.com/huggingface/datasets/pull/7672.diff · patch: https://github.com/huggingface/datasets/pull/7672.patch
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

```python
>>> Features({"a": Sequence(Sequence({"c": Value("int64")}))})
{'a': List({'c': List(Value('int64'))})}
```

instead of `{'a': {'c': List(List(Value('int64')))}}`

Comments (1):

1. The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
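A quick check of the behavior this PR fixes, mirroring the example from the description; it assumes `datasets` >= 4.0, where the legacy `Sequence` is rendered as the new `List` type:

```python
from datasets import Features, Sequence, Value

# A Sequence over a dict of features nests the dict correctly after the fix.
features = Features({"a": Sequence(Sequence({"c": Value("int64")}))})
print(features)
# After the fix: {'a': List({'c': List(Value('int64'))})}
```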
## Issue #7671: Mapping function not working if the first example is returned as None

- URL: https://github.com/huggingface/datasets/issues/7671 (API: https://api.github.com/repos/huggingface/datasets/issues/7671)
- id: 3213223886 · node_id: `I_kwDODunzps6_hefO` · author: [dnaihao](https://github.com/dnaihao) (NONE)
- State: closed (completed) · created: 2025-07-08T17:07:47 · closed: 2025-07-09T12:30:32 by dnaihao
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

### Describe the bug

https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37

Here we can see the writer is initialized on `i == 0`. However, there are cases where the user's mapping function filters out the first example (length constraints, etc.). In that case the writer is still `None`, and the code reports that `NoneType` has no write function.

A simple fix is available: change line 3652 from `if i == 0:` to `if writer is None:`.

### Steps to reproduce the bug

Prepare a dataset and use this mapping function:

```python
import datasets

def make_map_fn(split, max_prompt_tokens=3):
    def process_fn(example, idx):
        question = example['question']
        reasoning_steps = example['reasoning_steps']
        label = example['label']
        answer_format = ""
        for i in range(len(reasoning_steps)):
            system_message = "Dummy"
        all_steps_formatted = []
        content = f"""Dummy"""
        prompt = [
            {"role": "system", "content": system_message},
            {"role": "user", "content": content},
        ]
        # tokenizer is assumed to be defined elsewhere
        tokenized = tokenizer.apply_chat_template(prompt, return_tensors="pt", truncation=False)
        if tokenized.shape[1] > max_prompt_tokens:
            return None  # skip overly long examples
        data = {
            "dummy": "dummy"
        }
        return data
    return process_fn

...
# load your dataset
...
train = train.map(function=make_map_fn('train'), with_indices=True)
```

### Expected behavior

The dataset mapping should behave correctly even when the first example is filtered out.

### Environment info

I am using `datasets==3.6.0`, but I have observed this issue in the GitHub repo too: https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37

Comments (2):

1. Hi, `map()` always expects an output.

   If you wish to filter examples, you should use `filter()`; in your case it could be something like this:

   ```python
   ds = ds.map(my_processing_function).filter(ignore_long_prompts)
   ```

2. Realized this! Thanks a lot, I will close this issue then.
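A minimal, self-contained sketch of the map-then-filter pattern recommended in the first comment; the `question` column, the `too_long` flag and the word-count threshold are illustrative stand-ins for the tokenizer-based check in the repro:

```python
from datasets import Dataset

train = Dataset.from_dict(
    {"question": ["short prompt", "a much longer prompt that should be dropped"]}
)

def process_fn(example, idx):
    # map() must return an example for every row, so mark long prompts
    # here instead of returning None.
    example["too_long"] = len(example["question"].split()) > 3
    return example

train = (
    train.map(process_fn, with_indices=True)
         .filter(lambda example: not example["too_long"])
)
print(train["question"])  # only the short prompt remains
```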
## PR #7670: Fix audio bytes

- URL: https://github.com/huggingface/datasets/pull/7670 (API: https://api.github.com/repos/huggingface/datasets/issues/7670)
- id: 3208962372 · node_id: `PR_kwDODunzps6dwgOc` · author: [lhoestq](https://github.com/lhoestq) (MEMBER)
- State: closed · created: 2025-07-07T13:05:15 · merged: 2025-07-07T13:05:33Z · draft: false
- Diff: https://github.com/huggingface/datasets/pull/7670.diff · patch: https://github.com/huggingface/datasets/pull/7670.patch
- Labels: none · assignees: none · milestone: none · reactions: none

Body: none

Comments (1):

1. The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7670). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
## Issue #7669: How can I add my custom data to huggingface datasets

- URL: https://github.com/huggingface/datasets/issues/7669 (API: https://api.github.com/repos/huggingface/datasets/issues/7669)
- id: 3203541091 · node_id: `I_kwDODunzps6-8ihj` · author: [xiagod](https://github.com/xiagod) (NONE)
- State: open · created: 2025-07-04T19:19:54 · updated: 2025-07-05T18:19:37
- Labels: none · assignees: none · milestone: none · reactions: none

Body:

I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.

Comments (1):

1. Hey @xiagod

   The easiest way to add your custom data to Hugging Face Datasets is to use the built-in `load_dataset` function with your local files. Some examples:

   CSV files:

   ```python
   from datasets import load_dataset
   dataset = load_dataset("csv", data_files="my_file.csv")
   ```

   JSON or JSONL files:

   ```python
   from datasets import load_dataset
   dataset = load_dataset("json", data_files="my_file.json")
   ```

   Images stored in folders (e.g. `data/train/cat/`, `data/train/dog/`):

   ```python
   from datasets import load_dataset
   dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
   ```

   These methods let you quickly create a custom dataset without needing to write a full script.

   More information can be found in Hugging Face's "Create a dataset" tutorial or the "Load" documentation:

   https://huggingface.co/docs/datasets/create_dataset

   https://huggingface.co/docs/datasets/loading#local-and-remote-files

   If you want to submit your dataset to the Hugging Face Datasets GitHub repo so others can load it, follow this guide: https://huggingface.co/docs/datasets/upload_dataset
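A minimal sketch of the upload step that complements the loading examples in the comment above; the repo id is a placeholder, and pushing requires being logged in first (e.g. via `huggingface-cli login`):

```python
from datasets import load_dataset

# Build the dataset locally from a CSV file, then publish it to the Hub
# so others can load it by name with load_dataset("your-username/my-dataset").
dataset = load_dataset("csv", data_files="my_file.csv")
dataset.push_to_hub("your-username/my-dataset")  # placeholder repo id
```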
| 
	https://api.github.com/repos/huggingface/datasets/issues/7668 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7668/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7668/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7668/events | 
	https://github.com/huggingface/datasets/issues/7668 | 3,199,039,322 | 
	I_kwDODunzps6-rXda | 7,668 | 
	Broken EXIF crash the whole program | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/30485844?v=4",
  "events_url": "https://api.github.com/users/Seas0/events{/privacy}",
  "followers_url": "https://api.github.com/users/Seas0/followers",
  "following_url": "https://api.github.com/users/Seas0/following{/other_user}",
  "gists_url": "https://api.github.com/users/Seas0/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/Seas0",
  "id": 30485844,
  "login": "Seas0",
  "node_id": "MDQ6VXNlcjMwNDg1ODQ0",
  "organizations_url": "https://api.github.com/users/Seas0/orgs",
  "received_events_url": "https://api.github.com/users/Seas0/received_events",
  "repos_url": "https://api.github.com/users/Seas0/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/Seas0/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Seas0/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/Seas0",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)"
] | 2025-07-03T11:24:15 | 2025-07-03T12:27:16 | null | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
When parsing this image in the ImageNet1K dataset, `datasets` crashes the whole training process just because it cannot parse an invalid EXIF tag.

### Steps to reproduce the bug
Using the `datasets.Image.decode_example` method to decode the aforementioned image reproduces the bug.
The decoding function throws an unhandled exception at the `image.getexif()` method call due to an invalid UTF-8 stream in the EXIF tags.
```
File lib/python3.12/site-packages/datasets/features/image.py:188, in Image.decode_example(self, value, token_per_repo_id)
    186     image = PIL.Image.open(BytesIO(bytes_))
    187 image.load()  # to avoid "Too many open files" errors
--> 188 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
    189     image = PIL.ImageOps.exif_transpose(image)
    190 if self.mode and self.mode != image.mode:
File lib/python3.12/site-packages/PIL/Image.py:1542, in Image.getexif(self)
   1540 xmp_tags = self.info.get("XML:com.adobe.xmp")
   1541 if not xmp_tags and (xmp_tags := self.info.get("xmp")):
-> 1542     xmp_tags = xmp_tags.decode("utf-8")
   1543 if xmp_tags:
   1544     match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 4312: invalid start byte
```
### Expected behavior
The invalid EXIF tag should simply be ignored, or a warning should be issued, instead of crashing the whole program at once (see the sketch below).
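For illustration, here is a minimal defensive sketch, assuming one decodes the image bytes manually with Pillow; this is not the `datasets` library's actual fix, and the function name is made up:
```python
# Hypothetical guard: ignore metadata decoding errors instead of
# letting them abort the whole training run.
from io import BytesIO

import PIL.Image
import PIL.ImageOps

def decode_image_safely(bytes_: bytes) -> PIL.Image.Image:
    image = PIL.Image.open(BytesIO(bytes_))
    image.load()
    try:
        orientation = image.getexif().get(PIL.Image.ExifTags.Base.Orientation)
    except (UnicodeDecodeError, SyntaxError):
        orientation = None  # broken EXIF/XMP tags: skip orientation handling
    if orientation is not None:
        image = PIL.ImageOps.exif_transpose(image)
    return image
```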
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.0
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2025.3.0 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7668/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7668/timeline | null | null | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7667 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7667/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7667/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7667/events | 
	https://github.com/huggingface/datasets/pull/7667 | 3,196,251,707 | 
	PR_kwDODunzps6dGmm8 | 7,667 | 
	Fix infer list of images | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7667). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-02T15:07:58 | 2025-07-02T15:10:28 | 2025-07-02T15:08:03 | 
	MEMBER | null | null | null | 
	cc @kashif  | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7667/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7667/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7667.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7667",
  "merged_at": "2025-07-02T15:08:03Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7667.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7667"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7666 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7666/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7666/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7666/events | 
	https://github.com/huggingface/datasets/pull/7666 | 3,196,220,722 | 
	PR_kwDODunzps6dGf7E | 7,666 | 
	Backward compat list feature | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-02T14:58:00 | 2025-07-02T15:00:37 | 2025-07-02T14:59:40 | 
	MEMBER | null | null | null | 
	cc @kashif  | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 1,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 1,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7666/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7666/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7666.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7666",
  "merged_at": "2025-07-02T14:59:40Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7666.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7666"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7665 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7665/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7665/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7665/events | 
	https://github.com/huggingface/datasets/issues/7665 | 3,193,239,955 | 
	I_kwDODunzps6-VPmT | 7,665 | 
	Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
  "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
  "followers_url": "https://api.github.com/users/zdzichukowalski/followers",
  "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",
  "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/zdzichukowalski",
  "id": 1151198,
  "login": "zdzichukowalski",
  "node_id": "MDQ6VXNlcjExNTExOTg=",
  "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs",
  "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events",
  "repos_url": "https://api.github.com/users/zdzichukowalski/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/zdzichukowalski",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Somehow I created the issue twice🙈 This one is an exact duplicate of #7664."
] | 2025-07-01T17:14:53 | 2025-07-01T17:17:48 | 2025-07-01T17:17:48 | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field  as if it were part of the dataset schema. 
In my case there is a field `body:` with a string value 
```
"### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4"
```
As a result, I got an exception 
```
"TypeError: Couldn't cast array of type timestamp[s] to null". 
```
Full stack-trace in the attached file below.
I also attach a minimized dataset (data.json, a single entry) that reproduces the error.
**Observations** (on the minimal example): 
- if I remove _all fields before_ `body`, a different error appears,
- if I remove _all fields after_ `body`, yet another error appears,
- if `body` is _the only field_, the error disappears.
So this might be one complex bug or several edge cases interacting. I haven’t dug deeper.
Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified correctness of that workaround yet.
Anyway my understanding is that `load_dataset` with first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong.
[stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt)
[data.json](https://github.com/user-attachments/files/21004164/data.json)
P.S.
I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts.
### Steps to reproduce the bug
1. Download attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file.
2. Run the following code which should work correctly:
```
from datasets import load_dataset
load_dataset("json", data_files="data.json", split="train")
```
3. Change extension of the `data` file to `.jsonl` and run:
```
from datasets import load_dataset
load_dataset("json", data_files="data.jsonl", split="train")
```
This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt).
One can also try removing fields before the `body` field and after it. These actions give different errors.
### Expected behavior
Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema.
### Environment info
datasets version: _3.6.0_
pyarrow version: _20.0.0_
Python version: _3.11.9_
platform version: _macOS-15.5-arm64-arm-64bit_ | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
  "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
  "followers_url": "https://api.github.com/users/zdzichukowalski/followers",
  "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",
  "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/zdzichukowalski",
  "id": 1151198,
  "login": "zdzichukowalski",
  "node_id": "MDQ6VXNlcjExNTExOTg=",
  "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs",
  "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events",
  "repos_url": "https://api.github.com/users/zdzichukowalski/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/zdzichukowalski",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7665/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7665/timeline | null | 
	duplicate | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7664 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7664/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7664/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7664/events | 
	https://github.com/huggingface/datasets/issues/7664 | 3,193,239,035 | 
	I_kwDODunzps6-VPX7 | 7,664 | 
	Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
  "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
  "followers_url": "https://api.github.com/users/zdzichukowalski/followers",
  "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",
  "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/zdzichukowalski",
  "id": 1151198,
  "login": "zdzichukowalski",
  "node_id": "MDQ6VXNlcjExNTExOTg=",
  "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs",
  "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events",
  "repos_url": "https://api.github.com/users/zdzichukowalski/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/zdzichukowalski",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stacktrace? (I noticed in the stacktrace you had a bigger program, perhaps there are some side effects)",
  "Hi @zdzichukowalski, thanks for reporting this!\n\nTo help investigate this further, could you please share the following:\n\nExact contents of the data.jsonl file you're using — especially the first few lines that trigger the error.\n\nThe full code snippet you used to run load_dataset(), along with any environment setup (if not already shared).\n\nCan you confirm whether the issue persists when running in a clean virtual environment (e.g., with only datasets, pyarrow, and their dependencies)?\n\nIf possible, could you try running the same with an explicit features schema, like:\n\n```\nfrom datasets import load_dataset, Features, Value\nfeatures = Features({\"body\": Value(\"string\")})\nds = load_dataset(\"json\", data_files=\"data.jsonl\", split=\"train\", features=features)\n```\nAlso, just to clarify — does the \"body\" field contain plain string content, or is it sometimes being parsed from multi-line or structured inputs (like embedded JSON or CSV-like text)?\n\nOnce we have this info, we can check whether this is a schema inference issue, a PyArrow type coercion bug, or something else.",
  "Ok I can confirm that I also cannot reproduce the error in a clean environment with the minimized version of the dataset that I provided. Same story for the old environment. Nonetheless the bug still happens in the new environment with the full version of the dataset, which I am providing now. Please let me know if now you can reproduce the problem.\n\nAdditionally I'm attaching result of the `pip freeze` command.\n\n[datasets-issues.jsonl.zip](https://github.com/user-attachments/files/21081755/datasets-issues.jsonl.zip)\n[requirements.txt](https://github.com/user-attachments/files/21081776/requirements.txt)\n\n@ArjunJagdale running with explicit script gives the following stack:\n[stack_features_version.txt](https://github.com/user-attachments/files/21082056/stack_features_version.txt)\n\nThe problematic `body` field seems to be e.g. content of [this comment](https://github.com/huggingface/datasets/issues/5596#issue-1604919993) from Github in which someone provided a stack trace containing json structure ;) I would say that it is intended to be a plain string. \n\nTo find a part that triggers an error, simply search for the \"timestamp[s]\" in the dataset. There are few such entries.\n\nI think I provided all the information you asked. \n\nOh, and workaround I suggested, that is convert `.jsonl` to `.json` worked for me.\n\nP.S\n1. @itsmejul the stack trace I provided is coming from running the two-liner script that I attached. There is no bigger program, although there were some jupiter files alongside the script, which were run in the same env. I am not sure what part of the stack trace suggests that there is something more ;) \n\n2. Is it possible that on some layer in the python/env/jupiter there is some caching mechanism for files that would give false results for my minimized version of the dataset file? There is of course possibility that I made a mistake and run the script with the wrong file, but I double and triple checked things before creating an issue. Earlier I wrote that \"(...) changing the file extension to `.json` or `.txt` avoids the problem\". But with the full version this is not true(when I change to `txt`), and minimized version always works. So it looks like that when I changed the extension to e.g. `txt` then a minimized file loaded from the disk and it was parsed correctly, but every time when I changed back to `jsonl` my script must have used an original content of the file - the one before I made a minimization. But this is still all strange because I even removed the fields before and after the body from my minimized `jsonl` and there were some different errors(I mention it in my original post), so I do not get why today I cannot reproduce it in the original env... \n\n",
  "Hi @zdzichukowalski, thanks again for the detailed info and files!\n\nI’ve reviewed the `datasets-issues.jsonl` you shared, and I can now confirm the issue with full clarity:\n\nSome entries in the `\"body\"` field contain string content that resembles schema definitions — for example:\n\n```\nstruct<type: string, action: string, datetime: timestamp[s], ...>\n```\n\nThese strings appear to be copied from GitHub comments or stack traces (e.g., from #5596)\n\nWhen using the `.jsonl` format, `load_dataset()` relies on row-wise schema inference via PyArrow. If some rows contain real structured fields like `pull_request.merged_at` (a valid timestamp), and others contain schema-like text inside string fields, PyArrow can get confused while unifying the schema — leading to cast errors.\n\nThat’s why:\n\n* Using a reduced schema like `features={\"body\": Value(\"string\")}` fails — because the full table has many more fields.\n* Converting the file to `.json` (a list of objects) works — because global schema inference kicks in.\n* Filtering the dataset to only the `body` field avoids the issue entirely.\n\n### Suggested Workarounds\n\n* Convert the `.jsonl` file to `.json` to enable global schema inference.\n* Or, preprocess the `.jsonl` file to extract only the `\"body\"` field if that’s all you need.",
  "So in summary should we treat it as a low severity bug in `PyArrow`, in `Datasets` library, or as a proper behavior and do nothing with it?",
  "You are right actually! I’d also categorize this as a low-severity schema inference edge case, mainly stemming from PyArrow, but exposed by how datasets handles .jsonl inputs.\n\nIt's not a bug in datasets per se, but confusing when string fields (like body) contain text that resembles schema — e.g., \"timestamp[s]\".\n\nMaybe @lhoestq — could this be considered as a small feature/improvement?"
] | 2025-07-01T17:14:32 | 2025-07-09T13:14:11 | null | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field  as if it were part of the dataset schema. 
In my case there is a field `body:` with a string value 
```
"### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4"
```
As a result, I got an exception 
```
"TypeError: Couldn't cast array of type timestamp[s] to null". 
```
Full stack-trace in the attached file below.
I also attach a minimized dataset (data.json, a single entry) that reproduces the error.
**Observations** (on the minimal example): 
- if I remove _all fields before_ `body`, a different error appears,
- if I remove _all fields after_ `body`, yet another error appears,
- if `body` is _the only field_, the error disappears.
So this might be one complex bug or several edge cases interacting. I haven’t dug deeper.
Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified correctness of that workaround yet.
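For example, a rough sketch of that conversion, using the file names from this report:
```python
# Possible workaround sketch: rewrite the JSON Lines file as a single
# JSON array so that schema inference runs over the whole file at once.
import json

from datasets import load_dataset

with open("data.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

with open("data.json", "w", encoding="utf-8") as f:
    json.dump(records, f)

ds = load_dataset("json", data_files="data.json", split="train")
```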
Anyway my understanding is that `load_dataset` with first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong.
[stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt)
[data.json](https://github.com/user-attachments/files/21004164/data.json)
P.S.
I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts.
### Steps to reproduce the bug
1. Download attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file.
2. Run the following code which should work correctly:
```
from datasets import load_dataset
load_dataset("json", data_files="data.json", split="train")
```
3. Change extension of the `data` file to `.jsonl` and run:
```
from datasets import load_dataset
load_dataset("json", data_files="data.jsonl", split="train")
```
This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt).
One can also try removing fields before the `body` field and after it. These actions give different errors.
### Expected behavior
Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema.
### Environment info
datasets version: _3.6.0_
pyarrow version: _20.0.0_
Python version: _3.11.9_
platform version: _macOS-15.5-arm64-arm-64bit_ | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7664/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7664/timeline | null | null | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7663 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7663/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7663/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7663/events | 
	https://github.com/huggingface/datasets/pull/7663 | 3,192,582,371 | 
	PR_kwDODunzps6c6aJF | 7,663 | 
	Custom metadata filenames | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7663). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-01T13:50:36 | 2025-07-01T13:58:41 | 2025-07-01T13:58:39 | 
	MEMBER | null | null | null | 
	example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main
To make multiple subsets for an imagefolder (one metadata file per subset), e.g.
```yaml
configs:
  - config_name: default
    metadata_filenames:
      - metadata.csv
  - config_name: other
    metadata_filenames:
      - metadata2.csv
``` | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 1,
  "laugh": 0,
  "rocket": 0,
  "total_count": 1,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7663/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7663/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7663.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7663",
  "merged_at": "2025-07-01T13:58:39Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7663.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7663"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7662 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7662/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7662/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7662/events | 
	https://github.com/huggingface/datasets/issues/7662 | 3,190,805,531 | 
	I_kwDODunzps6-L9Qb | 7,662 | 
	Applying map after transform with multiprocessing will cause OOM | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/26482910?v=4",
  "events_url": "https://api.github.com/users/JunjieLl/events{/privacy}",
  "followers_url": "https://api.github.com/users/JunjieLl/followers",
  "following_url": "https://api.github.com/users/JunjieLl/following{/other_user}",
  "gists_url": "https://api.github.com/users/JunjieLl/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/JunjieLl",
  "id": 26482910,
  "login": "JunjieLl",
  "node_id": "MDQ6VXNlcjI2NDgyOTEw",
  "organizations_url": "https://api.github.com/users/JunjieLl/orgs",
  "received_events_url": "https://api.github.com/users/JunjieLl/received_events",
  "repos_url": "https://api.github.com/users/JunjieLl/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/JunjieLl/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/JunjieLl/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/JunjieLl",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Hi ! `add_column` loads the full column data in memory:\n\nhttps://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021\n\na workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time",
  "> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n>  column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nHow about cast_column,since map cannot apply type transformation, e.g. Audio(16000) to Audio(24000)",
  "cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n\ncasting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same",
  "> cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n> \n> casting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same\n\nThanks for replying. So the OOM is caused by add_column operation. When I skip the operation, low memory will be achieved. Right?",
  "> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n>  column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nNote num_process=1 would not cause OOM. I'm confused.\n\n"
] | 2025-07-01T05:45:57 | 2025-07-10T06:17:40 | null | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I found that the OOM is caused at this point, and I suspect it’s because the add_column and cast_column operations are not cached, which causes the entire dataset to be loaded in each subprocess, leading to the OOM. The critical line of code is: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/py_utils.py#L607
Note that `num_proc=1` does not cause OOM. I'm confused.
### Steps to reproduce the bug
To reproduce, you can load the amphion/Emilia-Dataset dataset with `cache_dir` set (for caching); it is a very large dataset that does not fit in RAM.
Then apply `map` with multiprocessing after a transform operation (e.g. `add_column`, `cast_column`).
As long as `num_proc` > 1, it causes an OOM.
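A sketch of the workaround suggested in the comments above: attach the new data inside `map()` instead of calling `add_column`, so only one batch is materialized at a time (the column name and values here are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

def add_language(batch):
    n = len(batch["text"])
    batch["language"] = ["en"] * n  # new column, built one batch at a time
    return batch

ds = ds.map(add_language, batched=True, num_proc=2)
print(ds.column_names)  # ['text', 'language']
```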
### Expected behavior
It should not cause OOM.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.10.134-16.101.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2024.6.1 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7662/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7662/timeline | null | null | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7661 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7661/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7661/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7661/events | 
	https://github.com/huggingface/datasets/pull/7661 | 3,190,408,237 | 
	PR_kwDODunzps6czBDi | 7,661 | 
	fix del tqdm lock error | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
  "events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
  "followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
  "following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
  "gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/Hypothesis-Z",
  "id": 44766273,
  "login": "Hypothesis-Z",
  "node_id": "MDQ6VXNlcjQ0NzY2Mjcz",
  "organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs",
  "received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events",
  "repos_url": "https://api.github.com/users/Hypothesis-Z/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/Hypothesis-Z",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-07-01T02:04:02 | 2025-07-08T01:38:46 | null | 
	NONE | null | null | null | 
	fixes https://github.com/huggingface/datasets/issues/7660 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7661/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7661/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7661.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7661",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7661.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7661"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7660 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7660/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7660/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7660/events | 
	https://github.com/huggingface/datasets/issues/7660 | 3,189,028,251 | 
	I_kwDODunzps6-FLWb | 7,660 | 
	AttributeError: type object 'tqdm' has no attribute '_lock' | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
  "events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
  "followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
  "following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
  "gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/Hypothesis-Z",
  "id": 44766273,
  "login": "Hypothesis-Z",
  "node_id": "MDQ6VXNlcjQ0NzY2Mjcz",
  "organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs",
  "received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events",
  "repos_url": "https://api.github.com/users/Hypothesis-Z/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/Hypothesis-Z",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n     def __delattr__(self, attr):\n        try:\n            super().__delattr__(attr)\n        except AttributeError:\n            if attr != '_lock':\n                print(attr)\n                raise\n\nclass Meta(type):\n    def __delattr__(cls, name):\n        if name == \"_lock\":\n            return  \n        return super().__delattr__(name)\n    \nclass tqdm2(old_tqdm, metaclass=Meta):\n    pass\n\ndel tqdm2._lock\ndel tqdm1._lock # error\n```\n\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/tqdm.py#L104-L122",
  "A cheaper option (seems to work in my case):  \n```python\nfrom datasets import tqdm as hf_tqdm\nhf_tqdm.set_lock(hf_tqdm.get_lock())\n```"
] | 2025-06-30T15:57:16 | 2025-07-03T15:14:27 | null | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in a thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to fix this. 
### Steps to reproduce the bug
You may have to try several times to reproduce the error because it depends on thread timing.
 
1. Save some datasets for testing:
```python
from datasets import Dataset, DatasetDict
import os
os.makedirs("test_dataset_shards", exist_ok=True)
for i in range(10):
    data = Dataset.from_dict({"text": [f"example {j}" for j in range(1000000)]})
    data = DatasetDict({'train': data})
    data.save_to_disk(f"test_dataset_shards/shard_{i}")
```
2. Load them in a thread pool:
```python
from datasets import load_from_disk
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import glob
datas = glob.glob('test_dataset_shards/shard_*')
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(load_from_disk, it) for it in datas]
datas = []
for future in tqdm(as_completed(futures), total=len(futures)):
    datas.append(future.result())
```
### Expected behavior
no exception raised
### Environment info
datasets==2.19.0
python==3.10 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7660/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7660/timeline | null | null | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7659 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7659/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7659/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7659/events | 
	https://github.com/huggingface/datasets/pull/7659 | 3,187,882,217 | 
	PR_kwDODunzps6cqkou | 7,659 | 
	Update the beans dataset link in Preprocess | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/5434867?v=4",
  "events_url": "https://api.github.com/users/HJassar/events{/privacy}",
  "followers_url": "https://api.github.com/users/HJassar/followers",
  "following_url": "https://api.github.com/users/HJassar/following{/other_user}",
  "gists_url": "https://api.github.com/users/HJassar/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/HJassar",
  "id": 5434867,
  "login": "HJassar",
  "node_id": "MDQ6VXNlcjU0MzQ4Njc=",
  "organizations_url": "https://api.github.com/users/HJassar/orgs",
  "received_events_url": "https://api.github.com/users/HJassar/received_events",
  "repos_url": "https://api.github.com/users/HJassar/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/HJassar/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/HJassar/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/HJassar",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[] | 2025-06-30T09:58:44 | 2025-07-07T08:38:19 | 2025-07-01T14:01:42 | 
	CONTRIBUTOR | null | null | null | 
	In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed. | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7659/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7659/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7659.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7659",
  "merged_at": "2025-07-01T14:01:42Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7659.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7659"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7658 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7658/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7658/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7658/events | 
	https://github.com/huggingface/datasets/pull/7658 | 3,187,800,504 | 
	PR_kwDODunzps6cqTMs | 7,658 | 
	Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!",
  "we can't know in advance the `features` after map() (it transforms the data !), so you can reuse the `features` from `info.features`",
  "I'll the patch as suggested — `info.features = features` or `self.info.features` — to ensure schema preservation while keeping the logic simple and explicit. WDYT?\r\n",
  "info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n\r\nhttps://github.com/huggingface/datasets/issues/7568 is not an issue we can fix",
  "> info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n> \r\n> #7568 is not an issue we can fix\r\n\r\nThanks for the clarification! Totally makes sense now — I understand that features=None is the expected behavior post-map() unless explicitly passed, and that preserving old schema by default could lead to incorrect assumptions.\r\nClosing this one — appreciate the feedback as always"
] | 2025-06-30T09:31:12 | 2025-07-01T16:26:30 | 2025-07-01T16:26:12 | 
	CONTRIBUTOR | null | null | null | 
	This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_names`.
Why
Previously, the code would always set `info.features = features`, even if `features` was `None`. When mapping with removal of columns or other transformations, this led to the destruction of the schema and caused failures in code that relied on the dataset schema being present.
How
We now update `info.features` only if `features` is not `None`. This preserves the original schema unless the user explicitly provides a new one.
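In essence, a simplified sketch of the described change (not the exact diff):
```python
# Only overwrite the schema when the caller passed one explicitly;
# otherwise keep whatever info.features the dataset already had.
if features is not None:
    info.features = features
```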
Reference
Fixes #7568 | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7658/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7658/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7658.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7658",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7658.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7658"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7657 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7657/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7657/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7657/events | 
	https://github.com/huggingface/datasets/pull/7657 | 3,186,036,016 | 
	PR_kwDODunzps6cks2E | 7,657 | 
	feat: add subset_name as alias for name in load_dataset | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-29T10:39:00 | 2025-06-29T10:55:11 | null | 
	CONTRIBUTOR | null | null | null | 
	fixes #7637
This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users.
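For illustration, a hypothetical call using the proposed alias (the `subset_name` argument exists only with this PR applied):
```python
from datasets import load_dataset

# equivalent to load_dataset("nyu-mll/glue", "mrpc", split="train")
ds = load_dataset("nyu-mll/glue", subset_name="mrpc", split="train")
```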
Supports `subset_name` in `load_dataset()`
Adds `.subset_name` property to DatasetBuilder
Maintains full backward compatibility
Raises clear error if name and `subset_name` conflict | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7657/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7657/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7657.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7657",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7657.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7657"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7656 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7656/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7656/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7656/events | 
	https://github.com/huggingface/datasets/pull/7656 | 3,185,865,686 | 
	PR_kwDODunzps6ckPHc | 7,656 | 
	fix(iterable): ensure MappedExamplesIterable supports state_dict for resume | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-29T07:50:13 | 2025-06-29T07:50:13 | null | 
	CONTRIBUTOR | null | null | null | 
	Fixes #7630
### Problem
When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable.
### What This PR Does
This patch adds:
```python
def state_dict(self):
    # expose the wrapped iterable's checkpoint state
    return self.ex_iterable.state_dict()

def load_state_dict(self, state):
    # restore the wrapped iterable's position when resuming
    self.ex_iterable.load_state_dict(state)
```
to `MappedExamplesIterable`, so the wrapped base iterable's state can be saved and restored as expected.
### Result
Using `.map()` no longer causes sample skipping after checkpoint resume.
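A minimal checkpoint-and-resume sketch, following the resumable-streaming pattern from the library docs:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(6))}).to_iterable_dataset(num_shards=3)
ds = ds.map(lambda ex: {"a": ex["a"] * 2})

state_dict = None
for idx, example in enumerate(ds):
    if idx == 2:
        state_dict = ds.state_dict()  # checkpoint after three examples
        break

# with this fix, iteration resumes at the fourth example instead of skipping ahead
ds.load_state_dict(state_dict)
print(list(ds))
```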
Let me know if a dedicated test case is required — happy to add one! | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7656/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7656/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7656.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7656",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7656.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7656"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7655 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7655/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7655/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7655/events | 
	https://github.com/huggingface/datasets/pull/7655 | 3,185,382,105 | 
	PR_kwDODunzps6ci9oi | 7,655 | 
Added specific use cases in Improve Performance | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-28T19:00:32 | 2025-06-28T19:00:32 | null | 
	CONTRIBUTOR | null | null | null | 
	Fixes #2494 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7655/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7655/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7655.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7655",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7655.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7655"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7654 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7654/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7654/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7654/events | 
	https://github.com/huggingface/datasets/pull/7654 | 3,184,770,992 | 
	PR_kwDODunzps6chPmz | 7,654 | 
	fix(load): strip deprecated use_auth_token from config_kwargs | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-28T09:20:21 | 2025-06-28T09:20:21 | null | 
	CONTRIBUTOR | null | null | null | 
	Fixes #7504
This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`.
**What was happening:**
Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key`.
**Why:**
`use_auth_token` has been deprecated and removed from config definitions (replaced by `token`), but the `load_dataset()` function still forwarded it via `**config_kwargs` to BuilderConfigs, leading to unrecognized key errors.
**Fix:**
We now intercept and strip `use_auth_token` from `config_kwargs` inside `load_dataset`, replacing it with a warning:
```python
if "use_auth_token" in config_kwargs:
    logger.warning("The 'use_auth_token' argument is deprecated. Please use 'token' instead.")
    config_kwargs.pop("use_auth_token")
```
This ensures legacy compatibility while guiding users to switch to the token argument.
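Caller-side, the effect looks like this (file name hypothetical):
```python
from datasets import load_dataset

# previously raised ValueError; with this patch it logs a deprecation warning
# and drops the argument (pass `token=...` to actually authenticate)
ds = load_dataset("parquet", data_files="data.parquet", use_auth_token=True)
```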
Let me know if you'd prefer a deprecation error instead of a warning. Thanks! | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7654/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7654/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7654.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7654",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7654.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7654"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7653 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7653/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7653/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7653/events | 
	https://github.com/huggingface/datasets/pull/7653 | 3,184,746,093 | 
	PR_kwDODunzps6chLmb | 7,653 | 
	feat(load): fallback to `load_from_disk()` when loading a saved dataset directory | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-28T08:47:36 | 2025-06-28T08:47:36 | null | 
	CONTRIBUTOR | null | null | null | 
	### Related Issue
Fixes #7503  
Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets.
---
### What does this PR do?
This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `path` points to a dataset saved using `save_to_disk()`, and automatically redirects to `load_from_disk()`.
#### 🐛 Before (unexpected metadata-only rows):
```python
ds = load_dataset("/path/to/saved_dataset")
# → returns rows with only internal metadata (_data_files, _fingerprint, etc.)
```
#### ✅ After (graceful fallback):
```python
ds = load_dataset("/path/to/saved_dataset")
# → logs a warning and internally switches to load_from_disk()
```
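One plausible shape for the detection, based on the marker files `save_to_disk()` writes (a sketch, not necessarily the exact code in this PR):
```python
import os

def _is_saved_dataset_dir(path: str) -> bool:
    # DatasetDict.save_to_disk() writes dataset_dict.json at the root;
    # Dataset.save_to_disk() writes state.json and dataset_info.json
    return os.path.isfile(os.path.join(path, "dataset_dict.json")) or (
        os.path.isfile(os.path.join(path, "state.json"))
        and os.path.isfile(os.path.join(path, "dataset_info.json"))
    )
```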
---
### Why is this useful?
* Prevents confusion when reloading local datasets saved via `save_to_disk()`.
* Enables smoother compatibility with frameworks (e.g., TRL, `lighteval`) that rely on `load_dataset()` calls.
* Fully backward-compatible — hub-based loading, custom builders, and streaming remain untouched.
 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7653/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7653/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7653.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7653",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7653.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7653"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7652 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7652/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7652/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7652/events | 
	https://github.com/huggingface/datasets/pull/7652 | 3,183,372,055 | 
	PR_kwDODunzps6cdCnv | 7,652 | 
	Add columns support to JSON loader for selective key filtering | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.",
  "> I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.\r\n\r\nHi @aihao2000, Just to confirm — I have done the changes you asked for!\r\nIf you pass columns=[\"key1\", \"key2\", \"optional_key\"] to load_dataset(..., columns=...), and any of those keys are missing from the input JSON objects, the loader will automatically fill those columns with None values, instead of raising an error."
] | 2025-06-27T16:18:42 | 2025-07-03T09:52:48 | null | 
	CONTRIBUTOR | null | null | null | 
	Fixes #7594 
This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files — similar to how the columns=... argument works for Parquet.
As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.
### Example:
```python
from datasets import load_dataset
dataset = load_dataset("json", data_files="your_data.jsonl", columns=["id", "title"])
print(dataset["train"].column_names)
# Output: ['id', 'title']
```
### Summary of changes:
* Added `columns: Optional[List[str]]` to `JsonConfig`
* Updated `_generate_tables()` to filter selected columns (sketched below)
* Forwarded `columns` argument from `load_dataset()` to the config
* Added a test for validation (should be fine!)
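Missing keys are filled with `None` instead of raising, roughly like this (a sketch of the filtering, not the exact implementation):
```python
import pyarrow as pa

def select_columns(table: pa.Table, columns) -> pa.Table:
    # keep only the requested columns; absent keys become all-null columns
    return pa.table({
        col: table[col] if col in table.column_names else pa.nulls(len(table))
        for col in columns
    })
```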
Let me know if you'd like the same to be added for CSV or others as a follow-up — happy to help. | null | 
	{
  "+1": 1,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 1,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7652/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7652/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7652.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7652",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7652.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7652"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7651 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7651/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7651/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7651/events | 
	https://github.com/huggingface/datasets/pull/7651 | 3,182,792,775 | 
	PR_kwDODunzps6cbMmg | 7,651 | 
	fix: Extended metadata file names for folder_based_builder | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
  "events_url": "https://api.github.com/users/iPieter/events{/privacy}",
  "followers_url": "https://api.github.com/users/iPieter/followers",
  "following_url": "https://api.github.com/users/iPieter/following{/other_user}",
  "gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/iPieter",
  "id": 6965756,
  "login": "iPieter",
  "node_id": "MDQ6VXNlcjY5NjU3NTY=",
  "organizations_url": "https://api.github.com/users/iPieter/orgs",
  "received_events_url": "https://api.github.com/users/iPieter/received_events",
  "repos_url": "https://api.github.com/users/iPieter/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/iPieter/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/iPieter",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-27T13:12:11 | 2025-06-30T08:19:37 | null | 
	NONE | null | null | null | 
	Fixes #7650.
The metadata files generated by the `DatasetDict.save_to_disk` method are not included in the folder_based_builder's metadata list, causing issues when only one actual data file is present, as described in issue #7650.
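Illustratively, the builder's metadata file list has to also cover the artifacts written by `save_to_disk()` (constant and file names assumed here, for illustration only):
```python
# files that describe the dataset rather than containing rows
METADATA_FILENAMES = ["metadata.csv", "metadata.jsonl", "metadata.parquet"]
# save_to_disk() artifacts that must not be counted as data files
METADATA_FILENAMES += ["dataset_info.json", "state.json", "dataset_dict.json"]
```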
This PR adds these filenames to the builder, allowing correct loading. | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7651/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7651/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7651.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7651",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7651.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7651"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7650 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7650/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7650/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7650/events | 
	https://github.com/huggingface/datasets/issues/7650 | 3,182,745,315 | 
	I_kwDODunzps69tNbj | 7,650 | 
	`load_dataset` defaults to json file format for datasets with 1 shard | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
  "events_url": "https://api.github.com/users/iPieter/events{/privacy}",
  "followers_url": "https://api.github.com/users/iPieter/followers",
  "following_url": "https://api.github.com/users/iPieter/following{/other_user}",
  "gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/iPieter",
  "id": 6965756,
  "login": "iPieter",
  "node_id": "MDQ6VXNlcjY5NjU3NTY=",
  "organizations_url": "https://api.github.com/users/iPieter/orgs",
  "received_events_url": "https://api.github.com/users/iPieter/received_events",
  "repos_url": "https://api.github.com/users/iPieter/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/iPieter/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/iPieter",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-27T12:54:25 | 2025-06-27T12:54:25 | null | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation split is small enough to fit into a single shard, and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for streaming, and then load each dataset. I have no problem loading any of the other datasets with more than 1 arrow file/shard.
The error indicates the training set got loaded in arrow format (correct) and the validation set in json (incorrect). This seems to be because some of the metadata files are treated as dataset files.
```
Error loading /nfs/dataset_pt-uk: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('validation'): ('json', {})} 
```

Concretely, there is a mismatch between the metadata created by `DatasetDict.save_to_disk` and the builder for `datasets.load_dataset`:
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/data_files.py#L107
The `folder_based_builder` lists all files, and with a single arrow file the JSON files (which are actually metadata) are in the majority.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58
### Steps to reproduce the bug
Create a dataset with metadata and 1 arrow file in validation and multiple arrow files in the training set, following the above description. In my case, I saved the files via:
```python
from datasets import DatasetDict

dataset = DatasetDict({
    'train': train_dataset,
    'validation': val_dataset
})

dataset.save_to_disk(output_path, max_shard_size="50MB")
```
### Expected behavior
The dataset would get loaded.
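For reference, reloading with `load_from_disk` (using `output_path` from the snippet above) bypasses the format inference entirely:
```python
from datasets import load_from_disk

# load_from_disk() reads the metadata written by save_to_disk() directly,
# so no file-format inference takes place
dataset = load_from_disk(output_path)
```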
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.14.0-22-generic-x86_64-with-glibc2.41
- Python version: 3.12.7
- `huggingface_hub` version: 0.31.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7650/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7650/timeline | null | null | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7649 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7649/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7649/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7649/events | 
	https://github.com/huggingface/datasets/pull/7649 | 3,181,481,444 | 
	PR_kwDODunzps6cW0sQ | 7,649 | 
	Enable parallel shard upload in push_to_hub() using num_proc | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "it was already added in https://github.com/huggingface/datasets/pull/7606 actually ^^'",
  "Oh sure sure, Closing this one as redundant."
] | 2025-06-27T05:59:03 | 2025-07-07T18:13:53 | 2025-07-07T18:13:52 | 
	CONTRIBUTOR | null | null | null | 
	Fixes #7591
### Add num_proc support to `push_to_hub()` for parallel shard upload
This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`.
📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_push_parquet_shards_to_hub()`, it was not being used to parallelize the upload.
🔧 This PR updates the internal `_push_parquet_shards_to_hub()` function to:
- Use `multiprocessing.Pool` and `iflatmap_unordered()` for concurrent shard upload when `num_proc > 1`
- Preserve original serial upload behavior if `num_proc` is `None` or ≤ 1
- Keep tqdm progress and commit behavior unchanged
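Usage is unchanged; only the degree of parallelism differs (repo ids hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# uploads the parquet shards with 4 worker processes instead of serially
ds.push_to_hub("user/my-dataset", num_proc=4)
```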
Let me know if any test coverage or further changes are needed!
 | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7649/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7649/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7649.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7649",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7649.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7649"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7648 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7648/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7648/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7648/events | 
	https://github.com/huggingface/datasets/pull/7648 | 3,181,409,736 | 
	PR_kwDODunzps6cWmSn | 7,648 | 
	Fix misleading add_column() usage example in docstring | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "I believe there are other occurences of cases like this, like select_columns, select, filter, shard and flatten, could you also fix the docstring for them as well before we merge ?",
  "Done! @lhoestq! I've updated the docstring examples for the following methods to clarify that they return new datasets instead of modifying in-place:\r\n\r\n- `select_columns`\r\n- `select`\r\n- `filter`\r\n- `shard`\r\n- `flatten`\r\n",
  "Also, any suggestions on what kind of issues I should work on next? I tried looking on my own, but I’d be happy if you could assign me something — I’ll do my best!\r\n"
] | 2025-06-27T05:27:04 | 2025-07-08T07:25:26 | null | 
	CONTRIBUTOR | null | null | null | 
	Fixes #7611
This PR fixes the usage example in the Dataset.add_column() docstring, which previously implied that add_column() modifies the dataset in-place.
Why:
The method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change.
This should make the behavior clearer for users.
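The corrected pattern, for illustration:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
ds = ds.add_column("label", [0, 1])  # assign the result; add_column() is not in-place
```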
@lhoestq @davanstrien  | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7648/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7648/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7648.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7648",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7648.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7648"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7647 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7647/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7647/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7647/events | 
	https://github.com/huggingface/datasets/issues/7647 | 3,178,952,517 | 
	I_kwDODunzps69evdF | 7,647 | 
	loading mozilla-foundation--common_voice_11_0 fails | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/5703039?v=4",
  "events_url": "https://api.github.com/users/pavel-esir/events{/privacy}",
  "followers_url": "https://api.github.com/users/pavel-esir/followers",
  "following_url": "https://api.github.com/users/pavel-esir/following{/other_user}",
  "gists_url": "https://api.github.com/users/pavel-esir/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/pavel-esir",
  "id": 5703039,
  "login": "pavel-esir",
  "node_id": "MDQ6VXNlcjU3MDMwMzk=",
  "organizations_url": "https://api.github.com/users/pavel-esir/orgs",
  "received_events_url": "https://api.github.com/users/pavel-esir/received_events",
  "repos_url": "https://api.github.com/users/pavel-esir/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/pavel-esir/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/pavel-esir/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/pavel-esir",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "@claude Could you please address this issue",
  "kinda related: https://github.com/huggingface/datasets/issues/7675"
] | 2025-06-26T12:23:48 | 2025-07-10T14:49:30 | null | 
	NONE | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	### Describe the bug
Hello everyone,
I am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer:
```python
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/python3.10/site-packages/datasets/utils/file_utils.py:827, in _add_retries_to_file_obj_read_method.<locals>.read_with_retries(*args, **kwargs)
    825 for retry in range(1, max_retries + 1):
    826     try:
--> 827         out = read(*args, **kwargs)
    828         break
    829     except (
    830         _AiohttpClientError,
    831         asyncio.TimeoutError,
    832         requests.exceptions.ConnectionError,
    833         requests.exceptions.Timeout,
    834     ) as err:
File /usr/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
    319 def decode(self, input, final=False):
    320     # decode input (taking the buffer into account)
    321     data = self.buffer + input
--> 322     (result, consumed) = self._buffer_decode(data, self.errors, final)
    323     # keep undecoded input until the next call
    324     self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
When I remove streaming everything works, but I need `streaming=True`. (Incidentally, byte `0x8b` at position 1 matches the second byte of the gzip magic `1f 8b`, so the streamed file may be compressed data being decoded as text.)
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
### Expected behavior
Expected that it would download the dataset.
### Environment info
datasets==3.6.0 
python3.10
on all platforms linux/win/mac | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7647/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7647/timeline | null | null | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7646 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7646/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7646/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7646/events | 
	https://github.com/huggingface/datasets/pull/7646 | 3,178,036,854 | 
	PR_kwDODunzps6cLhrM | 7,646 | 
	Introduces automatic subset-level grouping for folder-based dataset builders #7066 | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/ArjunJagdale",
  "id": 142811259,
  "login": "ArjunJagdale",
  "node_id": "U_kgDOCIMgew",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/ArjunJagdale",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!",
  "Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n\r\nhttps://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n\r\nAlso the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?",
  "> Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n> \r\n> https://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n> \r\n> Also the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?\r\n\r\nThanks a lot for the review!\r\n\r\nYou're absolutely right — treating subsets as separate configs instead of overloaded splits makes much more sense. If that approach sounds good to you, I can move the grouping logic to `load.py`, where configs are instantiated, and revise the PR to emit one `BuilderConfig` per grouped subset.\r\n\r\nAlso totally agree on limiting grouping to structured file types — I’d scope this to `.json`, `.jsonl`, `.csv`, and `.parquet`.\r\n\r\nLet me know if this direction sounds good, and I’ll get started on the changes right away!\r\n"
] | 2025-06-26T07:01:37 | 2025-06-27T18:04:04 | null | 
	CONTRIBUTOR | null | null | null | 
	Fixes #7066
This PR introduces automatic **subset-level grouping** for folder-based dataset builders by:
1. Adding a utility function `group_files_by_subset()` that clusters files by root name, ignoring digits and shard suffixes (a sketch appears below).
2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one split per subset.
3. Adding unit tests for the grouping function.
4. Updating the documentation to describe this new behavior under `docs/source/repository_structure.mdx`.
---
### Motivation
Datasets with files like:
```
train0.jsonl
train1.jsonl
animals.jsonl
metadata.jsonl
```
will now be **automatically grouped** as:
- `"train"` subset → `train0.jsonl`, `train1.jsonl`
- `"animals"` subset → `animals.jsonl`
- `"metadata"` subset → `metadata.jsonl`
This enables structured multi-subset loading even when the dataset doesn't follow traditional `train/validation/test` split conventions.
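A minimal sketch of the grouping rule described above (simplified relative to the actual utility):
```python
import re
from collections import defaultdict

def group_files_by_subset(files):
    groups = defaultdict(list)
    for f in files:
        stem = f.rsplit(".", 1)[0]
        # strip trailing shard markers such as "0", "-00001", "_1"
        root = re.sub(r"[-_]?\d+$", "", stem) or stem
        groups[root].append(f)
    return dict(groups)

# group_files_by_subset(["train0.jsonl", "train1.jsonl", "animals.jsonl"])
# -> {"train": ["train0.jsonl", "train1.jsonl"], "animals": ["animals.jsonl"]}
```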
---
### Files Changed
- `src/datasets/data_files.py`: added `group_files_by_subset()` utility
- `src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py`: grouped files before yielding splits
- `tests/test_data_files.py`: added unit test `test_group_files_by_subset`
- `docs/source/repository_structure.mdx`: documented subset grouping for maintainers and users
---
### Benefits
- More flexible and robust dataset split logic
- Enables logical grouping of user-uploaded files without nested folder structure
- Backward-compatible with all existing folder-based configs
---
Ready for review ✅ | null | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7646/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7646/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7646.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7646",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7646.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7646"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7645 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7645/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7645/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7645/events | 
	https://github.com/huggingface/datasets/pull/7645 | 3,176,810,164 | 
	PR_kwDODunzps6cHkp- | 7,645 | 
	`ClassLabel` docs: Correct value for unknown labels | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/56924246?v=4",
  "events_url": "https://api.github.com/users/l-uuz/events{/privacy}",
  "followers_url": "https://api.github.com/users/l-uuz/followers",
  "following_url": "https://api.github.com/users/l-uuz/following{/other_user}",
  "gists_url": "https://api.github.com/users/l-uuz/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/l-uuz",
  "id": 56924246,
  "login": "l-uuz",
  "node_id": "MDQ6VXNlcjU2OTI0MjQ2",
  "organizations_url": "https://api.github.com/users/l-uuz/orgs",
  "received_events_url": "https://api.github.com/users/l-uuz/received_events",
  "repos_url": "https://api.github.com/users/l-uuz/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/l-uuz/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/l-uuz/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/l-uuz",
  "user_view_type": "public"
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-06-25T20:01:35 | 2025-06-25T20:01:35 | null | 
	NONE | null | null | null | 
This small change fixes the documentation to be consistent with what happens in `encode_example`.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129
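In short, `-1` is the accepted value for unknown or missing labels (a quick check; label names hypothetical):
```python
from datasets import ClassLabel

label = ClassLabel(names=["neg", "pos"])
label.encode_example("pos")  # -> 1
label.encode_example(-1)     # -> -1, the accepted "unknown label" value
```
 | null | 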
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7645/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7645/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7645.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7645",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7645.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7645"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7644 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7644/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7644/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7644/events | 
	https://github.com/huggingface/datasets/pull/7644 | 3,176,363,492 | 
	PR_kwDODunzps6cGGfW | 7,644 | 
	fix sequence ci | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T17:07:55 | 2025-06-25T17:10:30 | 2025-06-25T17:08:01 | 
	MEMBER | null | null | null | 
	fix error from https://github.com/huggingface/datasets/pull/7643 | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7644/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7644/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7644.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7644",
  "merged_at": "2025-06-25T17:08:01Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7644.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7644"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7643 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7643/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7643/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7643/events | 
	https://github.com/huggingface/datasets/pull/7643 | 3,176,354,431 | 
	PR_kwDODunzps6cGEeK | 7,643 | 
	Backward compat sequence instance | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T17:05:09 | 2025-06-25T17:07:40 | 2025-06-25T17:05:44 | 
	MEMBER | null | null | null | 
useful to still get `isinstance(Sequence(Value("int64")), Sequence)` for downstream libs like evaluate
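For example, a minimal check of the preserved behavior:
```python
from datasets import Sequence, Value

feat = Sequence(Value("int64"))
assert isinstance(feat, Sequence)  # still holds with this backward-compat change
```
 | 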
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7643/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7643/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7643.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7643",
  "merged_at": "2025-06-25T17:05:43Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7643.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7643"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7642 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7642/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7642/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7642/events | 
	https://github.com/huggingface/datasets/pull/7642 | 3,176,025,890 | 
	PR_kwDODunzps6cE_Wr | 7,642 | 
	fix length for ci | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[] | 2025-06-25T15:10:38 | 2025-06-25T15:11:53 | 2025-06-25T15:11:51 | 
	MEMBER | null | null | null | null | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7642/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7642/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7642.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7642",
  "merged_at": "2025-06-25T15:11:51Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7642.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7642"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7641 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7641/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7641/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7641/events | 
	https://github.com/huggingface/datasets/pull/7641 | 3,175,953,405 | 
	PR_kwDODunzps6cEwUl | 7,641 | 
	update docs and docstrings | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T14:48:58 | 2025-06-25T14:51:46 | 2025-06-25T14:49:33 | 
	MEMBER | null | null | null | null | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7641/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7641/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7641.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7641",
  "merged_at": "2025-06-25T14:49:33Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7641.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7641"
} | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7640 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7640/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7640/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7640/events | 
	https://github.com/huggingface/datasets/pull/7640 | 3,175,914,924 | 
	PR_kwDODunzps6cEofU | 7,640 | 
	better features repr | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T14:37:32 | 2025-06-25T14:46:47 | 2025-06-25T14:46:45 | 
	MEMBER | null | null | null | 
	following the addition of List in #7634 
before:
```python
In [3]: ds.features
Out[3]: 
{'json': {'id': Value(dtype='string', id=None),
  'metadata:transcript': [{'end': Value(dtype='float64', id=None),
    'start': Value(dtype='float64', id=None),
    'transcript': Value(dtype='string', id=None),
    'words': [{'end': Value(dtype='float64', id=None),
      'score': Value(dtype='float64', id=None),
      'start': Value(dtype='float64', id=None),
      'word': Value(dtype='string', id=None)}]}],
  'metadata:vad': [{'end': Value(dtype='float64', id=None),
    'start': Value(dtype='float64', id=None)}]},
 'mp4': Value(dtype='binary', id=None),
 'npz': {'boxes_and_keypoints:box': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'boxes_and_keypoints:is_valid_box': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
  'boxes_and_keypoints:keypoints': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
  'movement:EmotionArousalToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:EmotionValenceToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:FAUToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:FAUValue': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:alignment_head_rotation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:alignment_translation': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
  'movement:emotion_arousal': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:emotion_scores': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:emotion_valence': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:expression': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:frame_latent': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:gaze_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:head_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:hypernet_features': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'movement:is_valid': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'smplh:body_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
  'smplh:global_orient': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
  'smplh:is_valid': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
  'smplh:left_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
  'smplh:right_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
  'smplh:translation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None)},
 'wav': Audio(sampling_rate=None, mono=True, decode=True, id=None),
 '__key__': Value(dtype='string', id=None),
 '__url__': Value(dtype='string', id=None)}
```
after:
```python
In [3]: ds.features
Out[3]: 
{'json': {'id': Value('string'),
  'metadata:transcript': List({'end': Value('float64'), 'start': Value('float64'), 'transcript': Value('string'), 'words': List({'end': Value('float64'), 'score': Value('float64'), 'start': Value('float64'), 'word': Value('string')})}),
  'metadata:vad': List({'end': Value('float64'), 'start': Value('float64')})},
 'mp4': Value('binary'),
 'npz': {'boxes_and_keypoints:box': List(List(Value('float32'))),
  'boxes_and_keypoints:is_valid_box': List(Value('bool')),
  'boxes_and_keypoints:keypoints': List(List(List(Value('float32')))),
  'movement:EmotionArousalToken': List(List(Value('float32'))),
  'movement:EmotionValenceToken': List(List(Value('float32'))),
  'movement:FAUToken': List(List(Value('float32'))),
  'movement:FAUValue': List(List(Value('float32'))),
  'movement:alignment_head_rotation': List(List(Value('float32'))),
  'movement:alignment_translation': List(List(List(Value('float32')))),
  'movement:emotion_arousal': List(List(Value('float32'))),
  'movement:emotion_scores': List(List(Value('float32'))),
  'movement:emotion_valence': List(List(Value('float32'))),
  'movement:expression': List(List(Value('float32'))),
  'movement:frame_latent': List(List(Value('float32'))),
  'movement:gaze_encodings': List(List(Value('float32'))),
  'movement:head_encodings': List(List(Value('float32'))),
  'movement:hypernet_features': List(List(Value('float32'))),
  'movement:is_valid': List(List(Value('float32'))),
  'smplh:body_pose': List(List(List(Value('float32')))),
  'smplh:global_orient': List(List(Value('float32'))),
  'smplh:is_valid': List(Value('bool')),
  'smplh:left_hand_pose': List(List(List(Value('float32')))),
  'smplh:right_hand_pose': List(List(List(Value('float32')))),
  'smplh:translation': List(List(Value('float32')))},
 'wav': Audio(sampling_rate=None, decode=True, stream_index=None),
 '__key__': Value('string'),
 '__url__': Value('string')}
``` | 
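To make the change concrete, here is a minimal sketch of building features that print in the new compact form. It assumes a `datasets` release that includes #7634, where `List` and `Value` are importable from the top-level package:
```python
# Minimal sketch, assuming a datasets release that includes #7634 so that
# `List` and `Value` are importable from the top-level `datasets` package.
from datasets import Features, List, Value

features = Features({
    "id": Value("string"),                      # scalar string column
    "keypoints": List(List(Value("float32"))),  # nested variable-length lists
})

# Prints the compact form shown above, e.g. List(List(Value('float32')))
# rather than the verbose Sequence(feature=..., length=-1, id=None) repr.
print(features)
```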
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/lhoestq",
  "id": 42851186,
  "login": "lhoestq",
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/lhoestq",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7640/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7640/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/7640.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7640",
  "merged_at": "2025-06-25T14:46:45Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/7640.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7640"
} | true | 
Dataset Card for GitHub Issues
Dataset Description
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
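A minimal loading sketch follows; note that the repo id `lewtun/github-issues` is an assumption inferred from the Contributions section below, not something this card states:
```python
# Minimal sketch; the repo id "lewtun/github-issues" is an assumption
# inferred from the Contributions section of this card.
from datasets import load_dataset

issues = load_dataset("lewtun/github-issues", split="train")
print(issues)              # row count and column names
print(issues[0]["title"])  # title of the first issue or pull request
```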
Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the task-category-tag with an appropriate other:other-task-name).
- task-category-tag: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant specifics of the language, such as whether it is social media text, African American English, ...
When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.
Dataset Structure
Data Instances
Provide a JSON-formatted example and a brief description of a typical instance in the dataset. If available, provide a link to further examples.
{
  'example_field': ...,
  ...
}
Provide here any additional information about the data that is not covered in the other sections. In particular, describe any relationships between data points and whether these relationships are made explicit.
Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- example_field: description of example_field
Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app; you will then only need to refine the generated descriptions.
Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
|---|---|---|---|
| Input Sentences | | | |
| Average Sentence Length | | | |
Dataset Creation
Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead, state that this information is unknown. See Larson 2017 on using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead, state that this information is unknown. See Larson 2017 on using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 on using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
Considerations for Using the Data
Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
Additional Information
Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
Licensing Information
Provide the license and link to the license webpage if available.
Citation Information
Provide the BibTex-formatted reference for the dataset. For example:
@article{article_id,
  author    = {Author List},
  title     = {Dataset Paper Title},
  journal   = {Publication Venue},
  year      = {2525}
}
If the dataset has a DOI, please provide it here.
Contributions
Thanks to @lewtun for adding this dataset.