Datasets: Dataset Viewer

Schema (flattened GitHub issue records from the huggingface/datasets repository): url, repository_url, labels_url, comments_url, events_url, html_url, id, node_id, number, title, labels, state, locked, assignee, assignees, milestone, comments, created_at, updated_at, closed_at, author_association, type, active_lock_reason, body, closed_by, timeline_url, performed_via_github_app, state_reason, user.* (login, id, node_id, avatar_url, and related profile URLs), sub_issues_summary.*, issue_dependencies_summary.*, reactions.* (total_count, +1, -1, laugh, hooray, confused, heart, rocket, eyes), draft, pull_request.*, closed_by.*, assignee.*, milestone.*, is_pull_request, comments_text.

---

## Issue #7818: train_test_split and stratify breaks with Numpy 2.0

https://github.com/huggingface/datasets/issues/7818
State: open | Author association: NONE | Labels: none | Comments: 0
### Describe the bug

As stated in the title, since NumPy changed the semantics of `copy` in version 2.0, the stratify parameter breaks.

e.g. `all_dataset.train_test_split(test_size=0.2, stratify_by_column="label")` raises a NumPy error.

It works if you downgrade NumPy to a version lower than 2.0.
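For context, a plausible culprit (an assumption based on the report, not confirmed in this thread) is NumPy 2.0's change to `copy=False`: it used to mean "copy only if needed" and now means "never copy, raise if a copy is required":

```python
import numpy as np

data = [0, 1, 0, 1]  # a plain Python list always needs a copy to become an ndarray

try:
    # NumPy 1.x: silently copies. NumPy 2.x: raises ValueError, which is
    # the kind of error code written for 1.x semantics can hit.
    arr = np.array(data, copy=False)
    print("NumPy 1.x behavior: copied anyway")
except ValueError:
    # The 2.x-safe spelling of "copy only if needed":
    arr = np.asarray(data)
    print("NumPy 2.x behavior: copy=False raised, used np.asarray instead")

print(arr.tolist())  # -> [0, 1, 0, 1]
```

The same script runs on both major versions; only the printed branch differs.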
### Steps to reproduce the bug

1. NumPy > 2.0
2. `all_dataset.train_test_split(test_size=0.2, stratify_by_column="label")`

### Expected behavior

It should return a stratified split, matching the results under NumPy < 2.0.

### Environment info

- `datasets` version: 2.14.4
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.13.7
- Huggingface_hub version: 0.34.4
- PyArrow version: 19.0.0
- Pandas version: 2.3.2
Reported by: davebulaval (https://github.com/davebulaval)
Reactions: 0

---

## Issue #7816: disable_progress_bar() not working as expected

https://github.com/huggingface/datasets/issues/7816
State: closed | Author association: NONE | Labels: none | Comments: 2
### Describe the bug

Hi,

I'm trying to load a dataset on the Kaggle TPU image. There is a known compat issue with progress bars on Kaggle, so I'm trying to disable the progress bar globally. This does not work, as you can see [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).

In contrast, disabling the progress bar for snapshot_download() works as expected, as shown [here](https://www.kaggle.com/code/windmaple/snapshot-download-error).

### Steps to reproduce the bug

See this [notebook](https://www.kaggle.com/code/windmaple/hf-datasets-issue).

There is something wrong with `shell_paraent`.

### Expected behavior

The downloader should disable the progress bar and move forward with no error.

### Environment info

The latest versions, installed via:

!pip install -U datasets ipywidgets ipykernel
Closed as: completed
Reported by: windmaple (https://github.com/windmaple)
Closed by: windmaple
Reactions: 1 (+1: 1)
Comments:

1. "@xianbaoqian"
2. "Closing this one since it's a Xet issue."

---

## Issue #7813: Caching does not work when using python3.14

https://github.com/huggingface/datasets/issues/7813
State: open | Author association: NONE | Labels: none | Comments: 2
### Describe the bug

```
Traceback (most recent call last):
  File "/workspace/ctn.py", line 8, in <module>
    ds = load_dataset(f"naver-clova-ix/synthdog-{lang}")  # or "synthdog-zh" for Chinese
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
    builder_instance = load_dataset_builder(
        path=path,
        ...<10 lines>...
        **config_kwargs,
    )
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
    builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 612, in _use_legacy_cache_dir_if_possible
    self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
    ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 485, in _check_legacy_cache2
    config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
                ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
    return cls.hash_bytes(dumps(value))
           ~~~~~^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
    dump(obj, file)
    ~~~~^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
    Pickler(file, recurse=True).dump(obj)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
    StockPickler.dump(self, obj)
    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^
  File "/usr/lib/python3.14/pickle.py", line 498, in dump
    self.save(obj)
    ~~~~~~~~~^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
    dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
    StockPickler.save(self, obj, save_persistent_id)
    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/pickle.py", line 572, in save
    f(self, obj)  # Call unbound method with explicit self
    ~^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
    StockPickler.save_dict(pickler, obj)
    ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/pickle.py", line 1064, in save_dict
    self._batch_setitems(obj.items(), obj)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
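The final `TypeError` arises because CPython 3.14 now passes the containing dict as an extra positional argument to the private `_Pickler._batch_setitems` hook, while dill overrides it with the old one-argument signature (see the dill issue linked in the comments). A version-tolerant override can absorb the extra argument; this is a sketch of the incompatibility, not datasets' actual fix:

```python
import io
import pickle


class CompatPickler(pickle._Pickler):
    # Subclass the pure-Python pickler (dill's base class is pickle._Pickler
    # too, imported as StockPickler in the traceback above).
    def _batch_setitems(self, items, *args):
        # Python <= 3.13 calls _batch_setitems(items); Python 3.14 calls
        # _batch_setitems(items, obj). *args absorbs the extra argument,
        # so one override works on both versions.
        super()._batch_setitems(items, *args)


buf = io.BytesIO()
CompatPickler(buf, protocol=pickle.HIGHEST_PROTOCOL).dump({"data_files": ["train.parquet"]})
print(pickle.loads(buf.getvalue()))  # -> {'data_files': ['train.parquet']}
```

Defining the override with a fixed two-argument signature (as dill did) would reproduce the `takes 2 positional arguments but 3 were given` error on 3.14.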
### Steps to reproduce the bug

```python
ds_train = ds["train"].map(lambda x: {**x, "lang": lang})
```

### Expected behavior

Caching and hashing work, as they do on Python 3.13 and earlier.
### Environment info
- `datasets` version: 4.2.0
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39
- Python version: 3.14.0
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
Reported by: intexcor (https://github.com/intexcor)
Reactions: 0
Comments:

1. "https://github.com/uqfoundation/dill/issues/725"
2. "@intexcor does #7817 fix your problem?"

---

## Issue #7811: SIGSEGV when Python exits due to near null deref

https://github.com/huggingface/datasets/issues/7811
State: open | Author association: NONE | Labels: none | Comments: 4
### Describe the bug
When I run the following python script using datasets I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Current executable set to '/Users/ian/bug/venv/bin/python3' (arm64).
(lldb) settings set -- target.run-args "crashmin.py"
(lldb) r
Process 8095 launched: '/Users/ian/bug/venv/bin/python3' (arm64)
Process 8095 stopped
* thread #2, stop reason = exec
frame #0: 0x0000000100014b30 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100014b30 <+0>: mov x0, sp
0x100014b34 <+4>: and sp, x0, #0xfffffffffffffff0
0x100014b38 <+8>: mov x29, #0x0 ; =0
Target 0: (Python) stopped.
(lldb) c
Process 8095 resuming
cols : 0% 0/1000 [00:00<?, ?cols/s]Process 8095 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
_datetime.cpython-313-darwin.so`delta_new:
-> 0x101783454 <+188>: ldr x3, [x20, #0x10]
0x101783458 <+192>: adrp x0, 10
0x10178345c <+196>: add x0, x0, #0x6fc ; "seconds"
Target 0: (Python) stopped.
(lldb) bt
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
* frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
frame #1: 0x0000000100704b60 Python`type_call + 96
frame #2: 0x000000010067ba34 Python`_PyObject_MakeTpCall + 120
frame #3: 0x00000001007aae3c Python`_PyEval_EvalFrameDefault + 30236
frame #4: 0x000000010067c900 Python`PyObject_CallOneArg + 112
frame #5: 0x000000010070f0a0 Python`slot_tp_finalize + 116
frame #6: 0x000000010070c3b4 Python`subtype_dealloc + 788
frame #7: 0x00000001006c378c Python`insertdict + 756
frame #8: 0x00000001006db2b0 Python`_PyModule_ClearDict + 660
frame #9: 0x000000010080a9a8 Python`finalize_modules + 1772
frame #10: 0x0000000100809a44 Python`_Py_Finalize + 264
frame #11: 0x0000000100837630 Python`Py_RunMain + 252
frame #12: 0x0000000100837ef8 Python`pymain_main + 304
frame #13: 0x0000000100837f98 Python`Py_BytesMain + 40
frame #14: 0x000000019cfcc274 dyld`start + 2840
(lldb) register read x20
x20 = 0x0000000000000000
(lldb)
```
### Steps to reproduce the bug
Run the script above, and observe the segfault.
### Expected behavior
No segfault
### Environment info
```
% pip freeze datasets | grep -i datasets
datasets==4.2.0
(venv) 0 ~/bug 14:58:06
% pip freeze tqdm | grep -i tqdm
tqdm==4.67.1
(venv) 0 ~/bug 14:58:16
% python --version
Python 3.13.7
```
Reported by: iankronquist (https://github.com/iankronquist)
Reactions: 0
Comments:

1. The issue seems to come from `dill` which is a `datasets` dependency, e.g. this segfaults:

   ```python
   import dill
   from tqdm import tqdm
   progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
   progress_bar.update(1)
   ```

   `tqdm` seems to segfault when `dill` is imported. I only found this about the segfault but it's maybe not related: https://github.com/tqdm/tqdm/issues/1678

2. After more investigation it seems to be because it imports `__main__`. This segfaults:

   ```python
   import __main__
   from tqdm import tqdm
   progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
   progress_bar.update(1)
   ```

   I opened an issue at https://github.com/tqdm/tqdm/issues/1687

3. Here is a workaround. You can run your code as long as the progress bar is closed before exiting.

   ```python
   from datasets import load_dataset
   from tqdm import tqdm

   progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
   progress_bar.update(1)
   progress_bar.close()  # avoids the segfault
   ```

4. https://github.com/tqdm/tqdm/issues/1687#issuecomment-3392457094

---

## Issue #7804: Support scientific data formats

https://github.com/huggingface/datasets/issues/7804
State: open | Author association: MEMBER | Labels: none | Comments: 1
List of formats and the libraries we can use to load the data in `datasets`:

- [ ] DICOM: pydicom
- [ ] NIfTI: nibabel
- [ ] WFDB: wfdb

cc @zaRizk7 for viz

Feel free to comment / suggest other formats and libs you'd like to see, or to share your interest in one of the mentioned formats.
Reported by: lhoestq (https://github.com/lhoestq)
Reactions: 6 (+1: 1, hooray: 2, heart: 3)
Comments:

1. Please add support for `Zarr`! That's what we use in the bioimaging community. It is crucial, because a raw upload of a *single* bio image can take _terabytes in memory_!

   The Python library would be `bioio` or `zarr`:
   - [ ] Zarr: `bioio` or `zarr`

   See a [Zarr example](https://ome.github.io/ome-ngff-validator/?source=https://uk1s3.embassy.ebi.ac.uk/bia-integrator-data/S-BIAD845/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe.zarr)

   cc @joshmoore

---

## Issue #7802: [Docs] Missing documentation for `Dataset.from_dict`

https://github.com/huggingface/datasets/issues/7802
State: open | Author association: NONE | Labels: none | Comments: 1
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes

Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029

The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.

The method in question:

```python
@classmethod
def from_dict(
    cls,
    mapping: dict,
    features: Optional[Features] = None,
    info: Optional[DatasetInfo] = None,
    split: Optional[NamedSplit] = None,
) -> "Dataset":
    """
    Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].

    Important: a dataset created with from_dict() lives in memory
    and therefore doesn't have an associated cache directory.
    This may change in the future, but in the meantime if you
    want to reduce memory usage you should write it back on disk
    and reload using e.g. save_to_disk / load_from_disk.

    Args:
        mapping (`Mapping`):
            Mapping of strings to Arrays or Python lists.
        features ([`Features`], *optional*):
            Dataset features.
        info (`DatasetInfo`, *optional*):
            Dataset information, like description, citation, etc.
        split (`NamedSplit`, *optional*):
            Name of the dataset split.

    Returns:
        [`Dataset`]
    """
```
Reported by: aaronshenhao (https://github.com/aaronshenhao)
Reactions: 0
Comments:

1. "I'd like to work on this documentation issue."

---

## Issue #7798: Audio dataset is not decoding on 4.1.1

https://github.com/huggingface/datasets/issues/7798
State: open | Author association: NONE | Labels: none | Comments: 3
### Describe the bug

The audio column remains non-decoded even when accessed.

```python
dataset = load_dataset("MrDragonFox/Elise", split="train")
dataset[0]  # see that it doesn't show 'array' etc.
```

Works fine with `datasets==3.6.0`.

Followed the docs in
- https://huggingface.co/docs/datasets/en/audio_load

### Steps to reproduce the bug

```python
dataset = load_dataset("MrDragonFox/Elise", split="train")
dataset[0]  # see that it doesn't show 'array' etc.
```

### Expected behavior

It should decode when accessing the element.

### Environment info

- datasets 4.1.1
- Ubuntu 22.04

Related:
- https://github.com/huggingface/datasets/issues/7707
Reported by: thewh1teagle (https://github.com/thewh1teagle)
Reactions: 0
[
"Previously (datasets<=3.6.0), audio columns were decoded automatically when accessing a row. Now, for performance reasons, audio decoding is lazy by default: you just see the file path unless you explicitly cast the column to Audio.\n\nHereβs the fix (following the current [datasets audio docs](https://huggingface.co/docs/datasets/en/audio_load)\n):\n\n```\nfrom datasets import load_dataset, Audio\n\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly decode the audio column\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\nprint(dataset[0][\"audio\"])\n# {'path': '...', 'array': array([...], dtype=float32), 'sampling_rate': 16000}\n```",
"@haitam03-yo's comment is right that the data is not decoded by default anymore indeed, but here is how it works in practice now:\n\nFrom `datasets` v4, audio data are read as [AudioDecoder](https://meta-pytorch.org/torchcodec/0.4/generated/torchcodec.decoders.AudioDecoder.html) objects from torchcodec. This doesn't decode the data by default, but you can call `audio.get_all_samples()` to decode the audio.\n\nSee the documentation on how to process audio data here: https://huggingface.co/docs/datasets/audio_process",
"To resolve this, you need to explicitly cast the audio column to the Audio feature. This will decode the audio data and make it accessible as an array. Here is the corrected code snippet\n\n\nfrom datasets import load_dataset, Audio\n\n# Load your dataset\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly cast the 'audio' column to the Audio feature\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\n# Now you can access the decoded audio array\nprint(dataset[0][\"audio\"])\n\nBy adding the cast_column step, you are telling the datasets library to decode the audio data with the specified sampling rate, and you will then be able to access the audio array as you were used to in previous versions."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/7793
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7793/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7793/events
|
https://github.com/huggingface/datasets/issues/7793
| 3,459,496,971 |
I_kwDODunzps7OM7wL
| 7,793 |
Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs
|
[] |
open
| false | null |
[] | null | 1 | 1,758 | 1,759 | null |
NONE
| null | null |
### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call last):
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
for _, table in generator:
^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/packaged_modules/parquet/parquet.py", line 93, in _generate_tables
for batch_idx, record_batch in enumerate(
~~~~~~~~~^
parquet_fragment.to_batches(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
):
^
File "pyarrow/_dataset.pyx", line 3904, in _iterator
File "pyarrow/_dataset.pyx", line 3494, in pyarrow._dataset.TaggedRecordBatchIterator.__next__
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/neev/scratch/test_hf.py", line 3, in <module>
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/load.py", line 1412, in load_dataset
builder_instance.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
storage_options=storage_options,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
### Expected behavior
The dataset loads
### Environment info
Datasets: 4.1.1
Python: 3.13
Platform: macOS
| null |
https://api.github.com/repos/huggingface/datasets/issues/7793/timeline
| null | null |
neevparikh
| 41,182,432 |
MDQ6VXNlcjQxMTgyNDMy
|
https://avatars.githubusercontent.com/u/41182432?v=4
|
https://api.github.com/users/neevparikh
|
https://github.com/neevparikh
|
https://api.github.com/users/neevparikh/followers
|
https://api.github.com/users/neevparikh/following{/other_user}
|
https://api.github.com/users/neevparikh/gists{/gist_id}
|
https://api.github.com/users/neevparikh/starred{/owner}{/repo}
|
https://api.github.com/users/neevparikh/subscriptions
|
https://api.github.com/users/neevparikh/orgs
|
https://api.github.com/users/neevparikh/repos
|
https://api.github.com/users/neevparikh/events{/privacy}
|
https://api.github.com/users/neevparikh/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7793/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"Hey @neevparikh,\nThanks for reporting this! I can reproduce the issue and have identified the root cause.\nProblem: The metr-evals/malt-public dataset contains deeply nested conversation data that exceeds PyArrow's 16MB chunk limit. When PyArrow tries to read it in chunks, it hits a fundamental limitation: \"Nested data conversions not implemented for chunked array outputs\".\nRoot Cause: Your dataset has large nested arrays (conversation trees with 4k-87k elements) that get automatically chunked by PyArrow, but the nested data conversion logic can't handle repetition levels across chunk boundaries\n I'm preparing a PR that adds a fallback mechanism to the parquet reader. When this specific error occurs, it will:\n\nDetect the nested data issue\nCombine chunks selectively for problematic columns\nContinue processing normally\n\nThis maintains backward compatibility while fixing the issue for nested datasets like yours.\nWorkaround (if you need immediate access): Try loading with smaller batch sizes:\npythonds = datasets.load_dataset(\"metr-evals/malt-public\", name=\"irrelevant_detail\", \n download_config=datasets.DownloadConfig(\n parquet_batch_size=1000\n ))"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/7792
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7792/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7792/events
|
https://github.com/huggingface/datasets/issues/7792
| 3,456,802,210 |
I_kwDODunzps7OCp2i
| 7,792 |
Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 16 | 1,758 | 1,759 | null |
NONE
| null | null |
### Feature request
I would like to be able to concatenate multiple `IterableDataset` instances with possibly different features. I would then like to stream the results in parallel (using both DDP and multiple workers in the PyTorch DataLoader). I want the merged dataset to be well balanced across the different processes.
### Motivation
I want to train a model on a combination of datasets, which I can convert to a single representation. This applies both to converting different datasets' items to the same Python class and to using a tokenizer on multiple modalities.
Assuming that my original datasets are not necessarily well balanced, as they may have different sizes and thus different numbers of shards, I would like the merged dataset to be distributed evenly over the multiple processes. I don't mind if it's not perfectly balanced and, as a result, some workers of the torch DataLoader do nothing, as long as DDP is properly handled and causes no deadlock.
### What I've tried
I've tried the two functions already provided in datasets, namely `interleave_datasets` and `concatenate_datasets`.
- Interleave seems to be the closest to what I'm trying to do. However, it doesn't suit my purpose because, as I understand it, it either stops as soon as one of the dataset sources is exhausted, or repeats the smallest source's items until the largest is exhausted. I would like something in between, similar to what [roundrobin does](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.roundrobin).
- Concatenate does not mix the data enough and one dataset may be overrepresented in some early batches.
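For reference, the roundrobin behavior mentioned above can be sketched in plain Python (equivalent in spirit to `more_itertools.roundrobin`): it keeps alternating across the sources, dropping each one as it is exhausted, so no source is truncated or repeated.

```python
# Plain-Python sketch of more_itertools.roundrobin: alternate across the
# sources, dropping each iterator once it is exhausted.
def roundrobin(*iterables):
    iterators = [iter(it) for it in iterables]
    while iterators:
        alive = []
        for it in iterators:
            try:
                yield next(it)
                alive.append(it)   # still has items (so far)
            except StopIteration:
                pass               # exhausted: drop it
        iterators = alive

print(list(roundrobin("ABC", "D", "EF")))
# ['A', 'D', 'E', 'B', 'F', 'C']
```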
Let's consider 3 datasets composed of different numbers of shards, as follows: [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]], where s denotes the underlying shard, the first index the dataset, and the second the shard number.
If we request 3 shards in `shard_data_sources`, we should obtain the following:
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
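This assignment can be checked with a small plain-Python sketch (no `datasets` dependency): flatten all shards, then deal them out with a strided slice.

```python
from itertools import chain, islice

# Shard layout from the example: 3 datasets with 2, 1 and 3 shards.
shards = [["s0_0", "s0_1"], ["s1_0"], ["s2_0", "s2_1", "s2_3"]]
flat = list(chain.from_iterable(shards))  # flatten all underlying shards

def shards_for_index(index, num_shards):
    # worker `index` takes one shard, then strides by `num_shards`
    return list(islice(flat, index, None, num_shards))

for i in range(3):
    print(f"index {i} gets {shards_for_index(i, 3)}")
# index 0 gets ['s0_0', 's2_0']
# index 1 gets ['s0_1', 's2_1']
# index 2 gets ['s1_0', 's2_3']
```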
I started implementing the following, but I'm afraid my sharding logic is incorrect.
```python
from copy import deepcopy
from itertools import chain, islice

import datasets
import numpy as np
from datasets import IterableDataset
from datasets.iterable_dataset import _BaseExamplesIterable
from more_itertools import roundrobin


class MixMultiSourcesExampleIterable(_BaseExamplesIterable):
    def __init__(self, ex_iterables: list[_BaseExamplesIterable]):
        super().__init__()
        self.ex_iterables = ex_iterables

    def _init_state_dict(self) -> dict:
        self._state_dict = {
            "ex_iterables": [ex_iterable._init_state_dict() for ex_iterable in self.ex_iterables],
            "type": self.__class__.__name__,
        }
        return self._state_dict

    @property
    def num_shards(self) -> int:
        return sum(ex_iterable.num_shards for ex_iterable in self.ex_iterables)

    def __iter__(self):
        yield from roundrobin(*self.ex_iterables)

    def shuffle_data_sources(self, generator: np.random.Generator) -> "MixMultiSourcesExampleIterable":
        """Shuffle the list of examples iterable, as well as each underlying examples iterable."""
        rng = deepcopy(generator)
        ex_iterables = list(self.ex_iterables)
        rng.shuffle(ex_iterables)
        ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in ex_iterables]
        return MixMultiSourcesExampleIterable(ex_iterables)

    def shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> "MixMultiSourcesExampleIterable":
        """Shard the underlying iterables in a roundrobin manner.

        Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],
        and we request 3 shards.
        index 0 gets s0_0 s2_0
        index 1 gets s0_1 s2_1
        index 2 gets s1_0 s2_3
        """
        return MixMultiSourcesExampleIterable(
            list(
                islice(
                    # flatten all underlying iterables
                    chain.from_iterable([ex_iterable.shard_data_sources(1, 0) for ex_iterable in self.ex_iterables]),
                    # offset the starting point by the index
                    index,
                    # take over the full list, so exhaust the iterators
                    None,
                    # step by the number of shards requested
                    num_shards,
                )
            )
        )


def mix_dataset(iterable_datasets: list[datasets.IterableDataset]) -> IterableDataset:
    ex_iterable = MixMultiSourcesExampleIterable([ds._ex_iterable for ds in iterable_datasets])
    return IterableDataset(
        ex_iterable, distributed=iterable_datasets[0]._distributed, formatting=iterable_datasets[0]._formatting
    )
```
### Questions
- Am I missing something? Is there a way to use `interleave_datasets` or `concatenate_datasets` to fit my purpose?
- Would it be the right approach to spread the maximum number of underlying shards across my different processes?
### Your contribution
As much as I can.
| null |
https://api.github.com/repos/huggingface/datasets/issues/7792/timeline
| null | null |
LTMeyer
| 13,559,010 |
MDQ6VXNlcjEzNTU5MDEw
|
https://avatars.githubusercontent.com/u/13559010?v=4
|
https://api.github.com/users/LTMeyer
|
https://github.com/LTMeyer
|
https://api.github.com/users/LTMeyer/followers
|
https://api.github.com/users/LTMeyer/following{/other_user}
|
https://api.github.com/users/LTMeyer/gists{/gist_id}
|
https://api.github.com/users/LTMeyer/starred{/owner}{/repo}
|
https://api.github.com/users/LTMeyer/subscriptions
|
https://api.github.com/users/LTMeyer/orgs
|
https://api.github.com/users/LTMeyer/repos
|
https://api.github.com/users/LTMeyer/events{/privacy}
|
https://api.github.com/users/LTMeyer/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7792/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"# With `datasets.Dataset`\n\nHere is an small script that shows the distribution differences of samples between `interleave_datasets`, `concatenate_datasets` and `concatenate_datasets` + shuffling.\n\n```python\nimport datasets as hf_datasets\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples):\n yield {\"dataset\": dataset, \"sample\": i}\n\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 2})\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 1})\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 3})\n\nn_workers = 3\nprint(f\"Simulate run with {n_workers} workers\")\n\nprint(\"Interleave datasets\")\nfor w in range(n_workers):\n ds_interleave = hf_datasets.interleave_datasets([ds_1, ds_2, ds_3]).shard(n_workers, w)\n for i, sample in enumerate(ds_interleave):\n print(f\"Worker {w} process sample {i} {sample}\")\n\nprint(\"Concatenate datasets\")\nfor w in range(n_workers):\n ds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shard(n_workers, w)\n for i, sample in enumerate(ds_concatenate):\n print(f\"Worker {w} process sample {i} {sample}\")\n\nprint(\"Concated and shuffled datasets\")\nfor w in range(n_workers):\n ds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shuffle().shard(n_workers, w)\n for i, sample in enumerate(ds_concatenate):\n print(f\"Worker {w} process sample {i} {sample}\")\n```\n\n> Interleave datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 0}\n\n> Concatenate datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 1}\nWorker 2 
process sample 1 {'dataset': 2, 'sample': 2}\n\n> Concated and shuffled datasets\nWorker 0 process sample 0 {'dataset': 2, 'sample': 2}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 2}\nWorker 2 process sample 1 {'dataset': 0, 'sample': 0}\n\nWithout shuffling, round robin would yield:\n> Worker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}",
"# With `datasets.IterableDataset`\n\nThe above works for `Dataset`, but with a sharded `IterableDataset` some data get discarded. See the following results obtained with the script below.\n\n> Simulate run with 3 workers\n\n> Interleave datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 1 fails with list index out of range.\nWorker 2 fails with list index out of range.\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n\n> Concatenate datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 1, 'sample': 0}\nWorker 0 process sample 2 {'dataset': 2, 'sample': 0}\nWorker 1 fails with list index out of range\nWorker 2 fails with list index out of range\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n\n> Concated and shuffled datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 1, 'sample': 0}\nWorker 0 process sample 2 {'dataset': 2, 'sample': 0}\nWorker 1 fails with list index out of range\nWorker 2 fails with list index out of range\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). 
Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n\n<details>\n\n<summary>Experiment script</summary>\n\n```python\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 2}).to_iterable_dataset(\n num_shards=2\n)\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 1}).to_iterable_dataset(\n num_shards=1\n)\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 3}).to_iterable_dataset(\n num_shards=3\n)\n\nn_workers = 3\nprint(f\"Simulate run with {n_workers} workers\")\n\nprint(\"\\nInterleave datasets\")\nds_interleave = hf_datasets.interleave_datasets([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_interleave.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}.\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_interleave, num_workers=n_workers):\n print(f\"{sample}\")\n\nprint(\"\\nConcatenate datasets\")\nds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_concatenate.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_concatenate, num_workers=n_workers):\n print(f\"{sample}\")\n\nprint(\"\\nConcated and shuffled datasets\")\nds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shuffle()\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_concatenate.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with 
{e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_concatenate, num_workers=n_workers):\n print(f\"{sample}\")\n```\n\n</details>\n\n# Round Robin with fixed logic\n\n> I started implementing the following, but I'm afraid my sharding logic is incorrect.\n\nHere is a solution for mixing the data in a round robin fashion that allows to distribute the data to all workers. In the previous example above only 1 worker over 3 was actually retrieving data, which resulted in discarding some data.\n\n```python\ndef shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> \"MixMultiSourceExampleIterable\":\n \"\"\"Shard the underlying iterables in a roundrobin manner.\n\n Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],\n and we request 3 shards.\n index 0 gets s0_0 s2_0\n index 1 gets s0_1 s2_1\n index 2 gets s1_0 s2_3\n \"\"\"\n return MixMultiSourcesExampleIterable(\n list(\n islice(\n # flatten all underlying iterables (fixed logic)\n [\n ex_iterable.shard_data_sources(ex_iterable.num_shards, index)\n for ex_iterable in self.ex_iterables\n for index in range(ex_iterable.num_shards)\n ],\n # offset the starting point by the index\n index,\n # take over the full list, so exhaust the iterators\n None,\n # step by the number of shards requested\n num_shards,\n )\n )\n )\n```\n\nEditing the example above with the following we obtain the expected result:\n```python\nprint(\"\\nMix datasets\")\nds_mix = mix_dataset([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_mix.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_mix, num_workers=n_workers):\n print(f\"{sample}\")\n```\n> Mix datasets\nMix datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 
2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}\nWith dataloader\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([0]), 'sample': tensor([1])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([1])}\n{'dataset': tensor([2]), 'sample': tensor([2])}\n\n# Questions \n\n- The example is quite small, showing that some data get discarded, but on large datasets is this significant?\n- How does the suggested solution interplays with shuffling?\n\n\n\n\n",
"# Larger Experiment\n\n> The example is quite small, showing that some data get discarded, but on large datasets is this significant?\n\nContinuing the experiment above, but with 3 larger and unbalanced datasets, with respectively 1000, 150, and 300 samples, and a dataloader with 4 workers:\n \n> Interleave datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 300 samples\n\n> Concatenate datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\n> Concated and shuffled datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\n> Mix datasets\nWith dataloader\nYield 1405 samples\n\nThe dataset mixing proposed above is the only one that yields all the samples while using all the dataloaders.\nAdditional checks should include training metrics (does it improve training quality to mix the data like this), and behavior check in a DDP settings, we don't want to face any deadlock due to some GPU having more batches than other. But this later point should be already handled by the iterator of the `IterableDataset`.\n\n# Follow up?\n\n@lhoestq would there be any interest in making a PR of it? Otherwise I can close the issue as I found a solution to my problem. ",
"I believe this PR could solve your issue? :)\n\nhttps://github.com/huggingface/datasets/pull/7786",
"> I believe this PR could solve your issue? :)\n\nThank you @lhoestq for the reply.\nI have just tested it with the script above. It gives:\n\n> Interleave datasets without replacement\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\nIf we compare with the original `interleave_dataset` method it produces 405 samples more. However, it only uses 1 worker on the 4 available. Moreover it doesn't yield all the samples as the mixing strategy with RoundRobin above does (1405 samples vs 705).",
"@LTMeyer With the following script and using the code from #7786 I get all 1450 samples\n\n```\nimport datasets as hf_datasets\n\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples):\n yield {\"dataset\": dataset, \"sample\": i}\n\n\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 1000}).to_iterable_dataset()\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 150}).to_iterable_dataset()\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 300}).to_iterable_dataset()\n\nprint(\"Interleave datasets\")\nds_interleave = hf_datasets.interleave_datasets(\n [ds_1, ds_2, ds_3],\n probabilities=[1 / 3, 1 / 3, 1 / 3],\n stopping_strategy=\"all_exhausted_without_replacement\",\n)\nfor i, sample in enumerate(ds_interleave):\n print(f\"process sample {i} {sample}\")\n```\nI'm not sure on the workers side how many will be spawned and so on. ",
"> [@LTMeyer](https://github.com/LTMeyer) With the following script and using the code from [#7786](https://github.com/huggingface/datasets/pull/7786) I get all 1450 samples\n\nThis depends on the number of shards and the number of processes being used.\nIn the example below there is only one shard per dataset (the default of `to_iterable_dataset` method). Then, the for loop is running in the main process. It thus consumes all the shards, hence the 1450 samples.\n\n> \n> ```\n> import datasets as hf_datasets\n> \n> \n> def gen(dataset: int, n_samples: int):\n> for i in range(n_samples):\n> yield {\"dataset\": dataset, \"sample\": i}\n> \n> \n> ds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 1000}).to_iterable_dataset()\n> ds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 150}).to_iterable_dataset()\n> ds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 300}).to_iterable_dataset()\n> \n> print(\"Interleave datasets\")\n> ds_interleave = hf_datasets.interleave_datasets(\n> [ds_1, ds_2, ds_3],\n> probabilities=[1 / 3, 1 / 3, 1 / 3],\n> stopping_strategy=\"all_exhausted_without_replacement\",\n> )\n> for i, sample in enumerate(ds_interleave):\n> print(f\"process sample {i} {sample}\")\n> ```\n> \n\n\n> I'm not sure on the workers side how many will be spawned and so on.\n\nWhile using the data to train a model, I would like to use the `torch.utils.data.DataLoader` to feed batches of data to my model. To make the data loading fast, it is common to use `num_workers>0` in the dataloader. This will consume data in parallel. In practice, it copies the dataset instance and read in parallel different chunks of data. 
These chunks correspond to the underlying shards of the iterable dataset.\n\nIf we have 1 shard per dataset, as it is the case in the example above, the dataloading will indeed get all the 1450 samples, but it will run only in one process even if multiple are available. This is inefficient because it doesn't utilize all available resources. See the script and results below.\n\n```python\nfor num_workers in [0, 1, 2, 3, 4]:\n print(f\"Dataloader with {num_workers} workers.\")\n dataloader = DataLoader(ds_interleave, num_workers=num_workers, batch_size=1)\n for i, sample in enumerate(dataloader, start=1):\n pass\n print(f\"{i} processed samples\")\n```\n\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n1450 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n1450 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n1450 processed samples\n```\n\nNow if we shard our data differently, like 2, 1, and 3 for each dataset respectively as the [previous example](https://github.com/huggingface/datasets/issues/7792#issuecomment-3345970293), and use a dataloader with different number of workers (same script as above), we obtain:\n\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n850 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n750 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). 
Stopping 3 dataloader workers.\n750 processed samples\n```",
"I added a small fix to your PR @radulescupetru to try to make @LTMeyer 's example work :)\n\nCan you confirm it works for you now @LTMeyer ?\n\nNote that maximum parallelism requires each subset to have num_shards >= num_workers, otherwise there aren't enough shards to distribute to every worker for interleaving. In your example one of the subsets has only 1 shard, so only 1 worker can take care of interleaving.",
"> Can you confirm it works for you now [@LTMeyer](https://github.com/LTMeyer) ?\n\nResult with https://github.com/huggingface/datasets/pull/7786/commits/a547d81469128bea4acc3bcc2a4a6a95968936ee:\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n1450 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n1450 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n1450 processed samples\n```\n\n I have checked with the script above and I confirm that all samples are now correctly returned, thank you @lhoestq .\n\n> Note that maximum parallelism requires each subset to have num_shards >= num_workers, otherwise there aren't enough shards to distribute to every worker for interleaving. In your example one of the subsets has only 1 shard, so only 1 worker can take care of interleaving.\n\nThis point I'm not sure I understand. That is maybe where @radulescupetru's intent and mine differ. Why should we limit the number of workers to the minimum number of shards? My initial goal was to distribute shards among workers to maximize data loading speed, and to mix the data so batches are representative of the whole dataset and diverse enough (hence the round-robin). \n\nIn the example above, we have 6 shards in total, can we not distribute these shards among workers? That what the `MixMultiSourcesExampleIterable` in https://github.com/huggingface/datasets/issues/7792#issuecomment-3345970293 above does.\n- If 2 workers, 3 shards for each. 
\n- If 3 workers, 2 shards for each.\n- If 4 workers, the 2 first ones get 2 shards while the two last ones get only 1.\n- Above 6 workers, the 6 first ones get 1 shard each, and the remaining workers get none.\n\n\n",
"@LTMeyer I think it's just a design choice that datasets library took. From my interaction with it, it seems that even when concatenating or interleaving, individual components are still treated individually (for example, num_shards is not summed).\n\nI guess in a real scenario you wouldn't end up with 1 shard only, but it's true that you need to be a bit careful with the setup. For workers it's a bit more automated in the sense that if you have more it will stop the extra ones, but when distributing a dataset over multiple gpus it's even more tricky as if the number of shards is not a factor of world size iterating is slower.",
"> [@LTMeyer](https://github.com/LTMeyer) I think it's just a design choice that datasets library took. From my interaction with it, it seems that even when concatenating or interleaving, individual components are still treated individually (for example, num_shards is not summed).\n\nIndeed. I am curious to know if there is any explanation for this choice that I am missing.\n\n> I guess in a real scenario you wouldn't end up with 1 shard only, but it's true that you need to be a bit careful with the setup. \n\nIn my case I would like to mix many small datasets which are individually based on only a few shards. So it's actually close to the case with 1 shard only.\n\n> For workers it's a bit more automated in the sense that if you have more it will stop the extra ones, but when distributing a dataset over multiple gpus it's even more tricky as if the number of shards is not a factor of world size iterating is slower.\n\nMy understanding is that, in a multi-GPU setting, we want each GPU to receive the same number of batches to avoid deadlock in any synchronization process. \nMulti-GPU sharding of the `IterableDataset` is managed here: https://github.com/huggingface/datasets/blob/4.1.1/src/datasets/iterable_dataset.py#L2371-L2392,\nwhile the sharding for dataloaders with multiple workers is handled here: https://github.com/huggingface/datasets/blob/4.1.1/src/datasets/iterable_dataset.py#L2292-L2314.\n\nHere is a script to check the behavior in the multi-GPU case, using `split_dataset_by_node`. 
In the example I consider just 2 GPUs.\n\n```python\nworld_size = 2\nfor num_workers in [0, 1, 2, 3, 4]:\n for rank in range(world_size):\n print(f\"Rank {rank}\")\n ds_interleave_rank = split_dataset_by_node(ds_interleave, rank, world_size)\n print(f\"Dataloader with {num_workers} workers.\")\n dataloader = DataLoader(ds_interleave_rank, num_workers=num_workers, batch_size=1)\n for i, _ in enumerate(dataloader, start=1):\n pass\n print(f\"{i} processed samples\")\n print(\"\\n\")\n```\n\nThe results using https://github.com/huggingface/datasets/pull/7786/commits/455bfaaa6d574aa9d9c9592baee390017512cc5f:\n```\nRank 0\nDataloader with 0 workers.\n725 processed samples\nRank 1\nDataloader with 0 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 1 workers.\n725 processed samples\nRank 1\nDataloader with 1 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 2 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 3 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). 
Stopping 3 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 4 workers.\n725 processed samples\n```\n\nIf I now use the mixing described above, the results are:\n```\nRank 0\nDataloader with 0 workers.\n750 processed samples\nRank 1\nDataloader with 0 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 1 workers.\n750 processed samples\nRank 1\nDataloader with 1 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 2 workers.\n750 processed samples\nRank 1\nDataloader with 2 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 3 workers.\n750 processed samples\nRank 1\nDataloader with 3 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 4 workers.\n750 processed samples\nRank 1\nDataloader with 4 workers.\n700 processed samples\n```\n\nDifferent GPUs received different numbers of batches, which is problematic. The interleave method, on the other hand, feeds each GPU with the same number of batches. Nonetheless, it doesn't leverage all available workers.\nI'll check if I can fix the distribution of shards across GPUs in the last configuration.",
"When concatenating or interleaving, the resulting `num_shards` is the *minimum `num_shards` of the input datasets*. This allows each new shard to always contain data from every input dataset, which ensures the right sampling in every shard when interleaving and the right data order when concatenating.\n\nSumming the dataset shards isn't ideal since each shard would contain data from only one of the datasets and would not contain any interleaved/concatenated data.",
"Thank you @lhoestq, it makes perfect sense. The part I am missing is that if I concatenate many datasets, each with a small number of shards, the result is a global dataset with not many shards, thus limiting the use of available workers. Data loading will consequently be inefficient. I was looking for a solution to leverage all available parallelism to maximize data loading speed.\n\nMy original use case was:\nI want to use a dataset stored on the HF hub. It is composed of many subfolders. Each of these subfolders contains only a few shards. I would like to use the dataset but only on a subset of folders, while keeping information about the origin of each sample (i.e. which subfolder it comes from).\nThe first part would be possible with the `data_files` argument of the `load_dataset` method. However, I would not have the origin information about the sample, as it is not provided in the original dataset. I was thus thinking about considering each subfolder as an independent HF iterable dataset and concatenating them. This method does not work because it drastically reduces the dataloading efficiency due to the low number of shards.\n\n> Summing the dataset shards isn't ideal since each shard would contain data from only one of the datasets and would not contain any interleaved/concatenated data.\n\nThis is not necessarily a problem for my use case. It will be the case for the original dataset anyway.",
"Also, I notice in the example above that if we modify the number of shards, we get different number of samples per GPU and workers even with the implementation of @radulescupetru. This will cause a deadlock in the DDP. So I guess HF expects all shards to contain the same number of samples. Is that a correct assumption @lhoestq?\n\nSetting the number of shards for the datasets above to 2, 2 and 3. Using the `interleave_datasets` I get the following:\n```\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 0 workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 0 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 1 workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 1 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 2 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 3 (max is dataset.num_shards=1). 
Stopping 2 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 3 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 4 workers.\n675 processed samples\n```",
"I see @LTMeyer, that makes sense. Do you think we should sum the shards by default for concatenating then ? I feel like your use case is more important than ensuring each worker has data of every subdataset in order.\n\n(I wouldn't touch the interleaving logic though)\n\n> Also, I notice in the example above that if we modify the number of shards, we get different number of samples per GPU and workers even with the implementation of @radulescupetru. This will cause a deadlock in the DDP. So I guess HF expects all shards to contain the same number of samples. Is that a correct assumption @lhoestq?\n\nShards rarely have the same number of samples, so the DDP algorithm itself should be able to stop on its own or have a strategy to circumvent this. For example it can loop until all the nodes have exhausted their data:\n\n```python\ndef loop():\n while True:\n yield from dataloader\n yield \"end\"\n\nfor x in loop():\n if x == \"end\":\n exhausted[rank] = True\n continue\n # stop once the data from all the ranks are exhausted\n dist.all_reduce(exhausted)\n if torch.all(exhausted):\n break\n # do your forward pass + loss here\n # model.forward(...)\n```\n\nI made a full example here: https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138",
"To summarize, and highlight the distinction with https://github.com/huggingface/datasets/pull/7786, there are actually two feature requests:\n1. Similarly to `interleave_datasets`, we want to interleave the longest dataset without repetition. This is handled by https://github.com/huggingface/datasets/pull/7786, and is consistent with the rest of the HF features (i.e. `concatenate_datasets` and `interleave_datasets`);\n2. We want to be able to _fuse_ datasets and distribute their shards across workers to maximize data loading speed.\n\n > I feel like your use case is more important than ensuring each worker has data of every subdataset in order.\n\nIndeed, my use case, point 2 above, is first about maximizing data loading speed and second about mixing the data. The order of priority seems to be the opposite in 1.\n\n> Do you think we should sum the shards by default for concatenating then?\n\nI think the library should at least provide a method for this. Users can then decide what matters the most for their use case (data order or dataloading speed). What do you think?\n\n> Shards rarely have the same number of samples, so the DDP algorithm itself should be able to stop on its own or have a strategy to circumvent this.\n\nIf an imbalanced data stream in a DDP context is not the responsibility of the datasets library, that is, for me, one more reason to provide a fuse/mix dataset method that sums the shards.\n\n> I made a full example here: https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138 \n\nThank you for the example. PyTorch now also provides utilities to handle this problematic case, see [Join context manager in DDP](https://docs.pytorch.org/tutorials/advanced/generic_join.html#:%7E:text=The%20context%20manager%20allows%20the,shadowed%20are%20specified%20by%20hooks)"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/7788
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7788/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7788/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7788/events
|
https://github.com/huggingface/datasets/issues/7788
| 3,450,913,796 |
I_kwDODunzps7NsMQE
| 7,788 |
`Dataset.to_sql` doesn't utilize `num_proc`
|
[] |
open
| false | null |
[] | null | 0 | 1,758 | 1,758 | null |
NONE
| null | null |
The underlying `SqlDatasetWriter` has `num_proc` as an available argument [here](https://github.com/huggingface/datasets/blob/5dc1a179783dff868b0547c8486268cfaea1ea1f/src/datasets/io/sql.py#L63), but `Dataset.to_sql()` does not accept it, so the SQL conversion always runs in a single process.
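For illustration, the sketch below is a hypothetical stand-in for the kind of parallelism `num_proc` enables in `SqlDatasetWriter` (it is not the library's implementation, and `format_batch`/`write_parallel` are made-up names): a pool prepares insert batches concurrently while the parent connection performs the actual SQL writes.

```python
# Hypothetical sketch of parallel batch preparation feeding a single SQL
# writer connection. A thread pool is used here to keep the sketch portable;
# the datasets library uses multiprocessing for `num_proc`.
import sqlite3
from multiprocessing.pool import ThreadPool


def format_batch(rows):
    # Per-batch conversion work that can run in parallel across the pool.
    return [(i, text.upper()) for i, text in rows]


def write_parallel(rows, batch_size=4, num_proc=2):
    batches = [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, txt TEXT)")
    with ThreadPool(num_proc) as pool:
        # Batches are formatted concurrently; inserts happen in order
        # on the single parent connection.
        for formatted in pool.imap(format_batch, batches):
            con.executemany("INSERT INTO t VALUES (?, ?)", formatted)
    return con.execute("SELECT count(*) FROM t").fetchone()[0]
```

Forwarding `num_proc` from `Dataset.to_sql()` down to the writer would expose this behavior without any workaround.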
| null |
https://api.github.com/repos/huggingface/datasets/issues/7788/timeline
| null | null |
tcsmaster
| 30,357,072 |
MDQ6VXNlcjMwMzU3MDcy
|
https://avatars.githubusercontent.com/u/30357072?v=4
|
https://api.github.com/users/tcsmaster
|
https://github.com/tcsmaster
|
https://api.github.com/users/tcsmaster/followers
|
https://api.github.com/users/tcsmaster/following{/other_user}
|
https://api.github.com/users/tcsmaster/gists{/gist_id}
|
https://api.github.com/users/tcsmaster/starred{/owner}{/repo}
|
https://api.github.com/users/tcsmaster/subscriptions
|
https://api.github.com/users/tcsmaster/orgs
|
https://api.github.com/users/tcsmaster/repos
|
https://api.github.com/users/tcsmaster/events{/privacy}
|
https://api.github.com/users/tcsmaster/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7788/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[] |
|
https://api.github.com/repos/huggingface/datasets/issues/7780
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7780/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7780/events
|
https://github.com/huggingface/datasets/issues/7780
| 3,429,267,259 |
I_kwDODunzps7MZnc7
| 7,780 |
BIGPATENT dataset inaccessible (deprecated script loader)
|
[] |
closed
| false | null |
[] | null | 2 | 1,758 | 1,758 | 1,758 |
NONE
| null | null |
dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with `datasets>=4.x`?
| null |
https://api.github.com/repos/huggingface/datasets/issues/7780/timeline
| null |
completed
|
ishmaifan
| 137,755,081 |
U_kgDOCDX5yQ
|
https://avatars.githubusercontent.com/u/137755081?v=4
|
https://api.github.com/users/ishmaifan
|
https://github.com/ishmaifan
|
https://api.github.com/users/ishmaifan/followers
|
https://api.github.com/users/ishmaifan/following{/other_user}
|
https://api.github.com/users/ishmaifan/gists{/gist_id}
|
https://api.github.com/users/ishmaifan/starred{/owner}{/repo}
|
https://api.github.com/users/ishmaifan/subscriptions
|
https://api.github.com/users/ishmaifan/orgs
|
https://api.github.com/users/ishmaifan/repos
|
https://api.github.com/users/ishmaifan/events{/privacy}
|
https://api.github.com/users/ishmaifan/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7780/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null |
lhoestq
| 42,851,186 |
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !",
"The dataset now works with `datasets` v4 ! closing this issue"
] |
||
https://api.github.com/repos/huggingface/datasets/issues/7777
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7777/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7777/events
|
https://github.com/huggingface/datasets/issues/7777
| 3,424,462,082 |
I_kwDODunzps7MHSUC
| 7,777 |
push_to_hub not overwriting but stuck in a loop when there are existing commits
|
[] |
closed
| false | null |
[] | null | 4 | 1,758 | 1,758 | 1,758 |
NONE
| null | null |
### Describe the bug
`get_deletions_and_dataset_card` gets stuck on an HTTP 412 ("a commit has happened") error when pushing to the Hub with tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that calls `push_to_hub`, and run it twice, each time with different content for the `datasets.Dataset`.
The code gets stuck in the `time.sleep` retry loop of `get_deletions_and_dataset_card`. If the error is printed explicitly, it is HTTP 412.
### Expected behavior
The new dataset overwrites the existing one in the repo.
### Environment info
datasets 4.1.0
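For context, a library-side mitigation might bound the retry loop instead of sleeping forever on HTTP 412. The helper below is purely illustrative (the exception class and function names are hypothetical, not `datasets` or `huggingface_hub` API):

```python
import time


class PreconditionFailedError(Exception):
    """Hypothetical stand-in for the Hub's HTTP 412 response."""


def commit_with_bounded_retries(do_commit, max_retries=3, sleep_s=0.0):
    # Retry on HTTP 412 (another commit landed in the meantime), but give
    # up and re-raise after max_retries instead of looping indefinitely.
    for attempt in range(max_retries):
        try:
            return do_commit()
        except PreconditionFailedError:
            if attempt == max_retries - 1:
                raise
            time.sleep(sleep_s)
```

With such a bound, a persistent 412 (e.g. caused by a caching proxy replaying stale responses) surfaces as an error instead of an apparent hang.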
| null |
https://api.github.com/repos/huggingface/datasets/issues/7777/timeline
| null |
completed
|
Darejkal
| 55,143,337 |
MDQ6VXNlcjU1MTQzMzM3
|
https://avatars.githubusercontent.com/u/55143337?v=4
|
https://api.github.com/users/Darejkal
|
https://github.com/Darejkal
|
https://api.github.com/users/Darejkal/followers
|
https://api.github.com/users/Darejkal/following{/other_user}
|
https://api.github.com/users/Darejkal/gists{/gist_id}
|
https://api.github.com/users/Darejkal/starred{/owner}{/repo}
|
https://api.github.com/users/Darejkal/subscriptions
|
https://api.github.com/users/Darejkal/orgs
|
https://api.github.com/users/Darejkal/repos
|
https://api.github.com/users/Darejkal/events{/privacy}
|
https://api.github.com/users/Darejkal/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7777/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null |
Darejkal
| 55,143,337 |
MDQ6VXNlcjU1MTQzMzM3
|
https://avatars.githubusercontent.com/u/55143337?v=4
|
https://api.github.com/users/Darejkal
|
https://github.com/Darejkal
|
https://api.github.com/users/Darejkal/followers
|
https://api.github.com/users/Darejkal/following{/other_user}
|
https://api.github.com/users/Darejkal/gists{/gist_id}
|
https://api.github.com/users/Darejkal/starred{/owner}{/repo}
|
https://api.github.com/users/Darejkal/subscriptions
|
https://api.github.com/users/Darejkal/orgs
|
https://api.github.com/users/Darejkal/repos
|
https://api.github.com/users/Darejkal/events{/privacy}
|
https://api.github.com/users/Darejkal/received_events
|
User
|
public
| false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There was only a map() followed by a push_to_hub(). The repo had one prior commit also by using push_to_hub(). The error disappeared when I downgraded datasets to 4.0.0.",
"It is reproducible if you use finegrained token with Read+Write (Open pull request) access to only that repo.",
"Ah it was due to the use of requests_cache with POST methods, closing this. "
] |
||
https://api.github.com/repos/huggingface/datasets/issues/7772
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7772/events
|
https://github.com/huggingface/datasets/issues/7772
| 3,417,353,751 |
I_kwDODunzps7LsK4X
| 7,772 |
Error processing scalar columns using tensorflow.
|
[] |
open
| false | null |
[] | null | 2 | 1,757 | 1,758 | null |
NONE
| null | null |
`datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
```
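Both errors stem from `train_ds['start_positions']` being a lazy `Column` object in recent `datasets` versions rather than an array, so materializing it first (e.g. `tf.convert_to_tensor(list(train_ds['start_positions']))`) may sidestep them. In plain Python, the `(-1, 1)` reshape amounts to the following (`to_column_vector` is a made-up helper name for illustration):

```python
# A lazy Column has no `.to_tensor` and no usable `len()` for TF; force it
# into a list first, then wrap each scalar in its own row: (n,) -> (n, 1).
def to_column_vector(column):
    values = list(column)         # materialize the lazy column
    return [[v] for v in values]  # equivalent of reshape(-1, 1)
```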
| null |
https://api.github.com/repos/huggingface/datasets/issues/7772/timeline
| null | null |
khteh
| 3,871,483 |
MDQ6VXNlcjM4NzE0ODM=
|
https://avatars.githubusercontent.com/u/3871483?v=4
|
https://api.github.com/users/khteh
|
https://github.com/khteh
|
https://api.github.com/users/khteh/followers
|
https://api.github.com/users/khteh/following{/other_user}
|
https://api.github.com/users/khteh/gists{/gist_id}
|
https://api.github.com/users/khteh/starred{/owner}{/repo}
|
https://api.github.com/users/khteh/subscriptions
|
https://api.github.com/users/khteh/orgs
|
https://api.github.com/users/khteh/repos
|
https://api.github.com/users/khteh/events{/privacy}
|
https://api.github.com/users/khteh/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7772/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"Using tf.convert_to_tensor works fine:\n\n```\nimport tensorflow as tf\n\nstart_pos = tf.convert_to_tensor(train_ds['start_positions'], dtype=tf.int64)\nstart_pos = tf.reshape(start_pos, [-1, 1])\n```\n\n\nAlternatively, using the built-in to_tf_dataset also avoids the issue:\n\n```\ntrain_tf = train_ds.to_tf_dataset(\n columns=['input_ids','attention_mask'],\n label_cols=['start_positions','end_positions'],\n shuffle=True,\n batch_size=32\n)\n```",
"```\n start_pos = tf.convert_to_tensor(self._train_ds['start_positions'], dtype=tf.int64)\n File \"/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/tensorflow/python/util/traceback_utils.py\", line 153, in error_handler\n raise e.with_traceback(filtered_tb) from None\n File \"/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/tensorflow/python/framework/constant_op.py\", line 108, in convert_to_eager_tensor\n return ops.EagerTensor(value, ctx.device_name, dtype)\n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nValueError: TypeError: Scalar tensor has no `len()`\nTraceback (most recent call last):\n\n File \"/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/tensorflow/python/framework/ops.py\", line 361, in __len__\n raise TypeError(\"Scalar tensor has no `len()`\")\n\nTypeError: Scalar tensor has no `len()`\n```\n\n`to_tf_dataset` works perfectly."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/7767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7767/events
|
https://github.com/huggingface/datasets/issues/7767
| 3,411,654,444 |
I_kwDODunzps7LWbcs
| 7,767 |
Custom `dl_manager` in `load_dataset`
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 1,757 | 1,757 | null |
NONE
| null | null |
### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# Create a dataset builder
builder_instance = load_dataset_builder(
path=path,
name=name,
data_dir=data_dir,
data_files=data_files,
cache_dir=cache_dir,
features=features,
download_config=download_config,
download_mode=download_mode,
revision=revision,
token=token,
storage_options=storage_options,
**config_kwargs,
)
# Return iterable dataset in case of streaming
if streaming:
return builder_instance.as_streaming_dataset(split=split)
# Note: This is the revised part
if dl_manager is None:
if download_config is None:
download_config = DownloadConfig(
cache_dir=builder_instance._cache_downloaded_dir,
force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
use_etag=False,
num_proc=num_proc,
token=builder_instance.token,
storage_options=builder_instance.storage_options,
) # We don't use etag for data files to speed up the process
dl_manager = DownloadManager(
dataset_name=builder_instance.dataset_name,
download_config=download_config,
data_dir=builder_instance.config.data_dir,
record_checksums=(
builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
),
)
# Download and prepare data
builder_instance.download_and_prepare(
download_config=download_config,
download_mode=download_mode,
verification_mode=verification_mode,
dl_manager=dl_manager, # pass the new argument
num_proc=num_proc,
storage_options=storage_options,
)
...
```
### Motivation
In my case, I'm hoping to handle the cache file downloading manually (not using hash filenames and saving to another location, or reusing potentially existing local files).
### Your contribution
It's already implemented above. If maintainers think this should be considered, I'll open a PR.
| null |
https://api.github.com/repos/huggingface/datasets/issues/7767/timeline
| null | null |
ain-soph
| 13,214,530 |
MDQ6VXNlcjEzMjE0NTMw
|
https://avatars.githubusercontent.com/u/13214530?v=4
|
https://api.github.com/users/ain-soph
|
https://github.com/ain-soph
|
https://api.github.com/users/ain-soph/followers
|
https://api.github.com/users/ain-soph/following{/other_user}
|
https://api.github.com/users/ain-soph/gists{/gist_id}
|
https://api.github.com/users/ain-soph/starred{/owner}{/repo}
|
https://api.github.com/users/ain-soph/subscriptions
|
https://api.github.com/users/ain-soph/orgs
|
https://api.github.com/users/ain-soph/repos
|
https://api.github.com/users/ain-soph/events{/privacy}
|
https://api.github.com/users/ain-soph/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7767/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[] |
|
https://api.github.com/repos/huggingface/datasets/issues/7766
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7766/events
|
https://github.com/huggingface/datasets/issues/7766
| 3,411,611,165 |
I_kwDODunzps7LWQ4d
| 7,766 |
cast columns to Image/Audio/Video with `storage_options`
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 5 | 1,757 | 1,758 | null |
NONE
| null | null |
### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).
### Your contribution
Could help with a PR at weekends
| null |
https://api.github.com/repos/huggingface/datasets/issues/7766/timeline
| null | null |
ain-soph
| 13,214,530 |
MDQ6VXNlcjEzMjE0NTMw
|
https://avatars.githubusercontent.com/u/13214530?v=4
|
https://api.github.com/users/ain-soph
|
https://github.com/ain-soph
|
https://api.github.com/users/ain-soph/followers
|
https://api.github.com/users/ain-soph/following{/other_user}
|
https://api.github.com/users/ain-soph/gists{/gist_id}
|
https://api.github.com/users/ain-soph/starred{/owner}{/repo}
|
https://api.github.com/users/ain-soph/subscriptions
|
https://api.github.com/users/ain-soph/orgs
|
https://api.github.com/users/ain-soph/repos
|
https://api.github.com/users/ain-soph/events{/privacy}
|
https://api.github.com/users/ain-soph/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7766/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"A",
"1",
"1",
"Ok",
"> ### Feature request\n> Allow `storage_options` to be passed in\n> \n> 1. `cast` related operations (e.g., `cast_columns, cast`)\n> 2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`\n> \n> import datasets\n> \n> image_path = \"s3://bucket/sample.png\"\n> dataset = datasets.Dataset.from_dict({\"image_path\": [image_path]})\n> \n> # dataset = dataset.cast_column(\"image_path\", datasets.Image()) # now works without `storage_options`\n> \n> # expected behavior\n> dataset = dataset.cast_column(\"image_path\", datasets.Image(), storage_options={\"anon\": True})\n> ### Motivation\n> I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).\n> \n> ### Your contribution\n> Could help with a PR at weekends\n\n\n\n>"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/7765
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7765/events
|
https://github.com/huggingface/datasets/issues/7765
| 3,411,556,378 |
I_kwDODunzps7LWDga
| 7,765 |
polars dataset cannot cast column to Image/Audio/Video
|
[] |
closed
| false | null |
[] | null | 2 | 1,757 | 1,760 | 1,760 |
NONE
| null | null |
### Describe the bug
`from_polars` dataset cannot cast column to Image/Audio/Video, while it works on `from_pandas` and `from_dict`
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises Error
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
{'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
{'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
### Expected behavior
`from_polars` case shouldn't raise error and have the same outputs as `from_pandas` and `from_dict`
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
```
| null |
https://api.github.com/repos/huggingface/datasets/issues/7765/timeline
| null |
completed
|
ain-soph
| 13,214,530 |
MDQ6VXNlcjEzMjE0NTMw
|
https://avatars.githubusercontent.com/u/13214530?v=4
|
https://api.github.com/users/ain-soph
|
https://github.com/ain-soph
|
https://api.github.com/users/ain-soph/followers
|
https://api.github.com/users/ain-soph/following{/other_user}
|
https://api.github.com/users/ain-soph/gists{/gist_id}
|
https://api.github.com/users/ain-soph/starred{/owner}{/repo}
|
https://api.github.com/users/ain-soph/subscriptions
|
https://api.github.com/users/ain-soph/orgs
|
https://api.github.com/users/ain-soph/repos
|
https://api.github.com/users/ain-soph/events{/privacy}
|
https://api.github.com/users/ain-soph/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7765/reactions
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null |
lhoestq
| 42,851,186 |
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] |
||
https://api.github.com/repos/huggingface/datasets/issues/7760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7760/events
|
https://github.com/huggingface/datasets/issues/7760
| 3,401,799,485 |
I_kwDODunzps7Kw1c9
| 7,760 |
Hugging Face Hub Dataset Upload CAS Error
|
[] |
open
| false | null |
[] | null | 4 | 1,757 | 1,758 | null |
NONE
| null | null |
### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. Tried using HF_HUB_DISABLE_XET=1. It seems to work for smaller files.
Exact error message :
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
β Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
Workaround Attempts
1. **Disabled XET**: Set `HF_HUB_DISABLE_XET=1` environment variable
2. **Updated hf-xet**: Use `hf-xet==1.1.9` rather than latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: β **SUCCESS** (uploaded successfully)
- 10,000 samples: β **FAILS** (401 Unauthorized)
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
dataset = Dataset.from_dict({
"question": questions,
"answer": answers,
# ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100)
| null |
https://api.github.com/repos/huggingface/datasets/issues/7760/timeline
| null | null |
n-bkoe
| 142,820,182 |
U_kgDOCINDVg
|
https://avatars.githubusercontent.com/u/142820182?v=4
|
https://api.github.com/users/n-bkoe
|
https://github.com/n-bkoe
|
https://api.github.com/users/n-bkoe/followers
|
https://api.github.com/users/n-bkoe/following{/other_user}
|
https://api.github.com/users/n-bkoe/gists{/gist_id}
|
https://api.github.com/users/n-bkoe/starred{/owner}{/repo}
|
https://api.github.com/users/n-bkoe/subscriptions
|
https://api.github.com/users/n-bkoe/orgs
|
https://api.github.com/users/n-bkoe/repos
|
https://api.github.com/users/n-bkoe/events{/privacy}
|
https://api.github.com/users/n-bkoe/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7760/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file size (in bytes) between 100 and 10,000 samples?\n2. Could you provide a specific repository where you encountered this so we could look at to attempt to trace this in our systems?\n3. I cannot currently reproduce this, but I'm just trying locally; have you tried to attempt this outside of SageMaker? I'm wondering if there is something unique about that environment causing this. \n4. How/where did you set `HF_HUB_DISABLE_XET`?",
"Hi, and thank you for your quick answer π \n\n1. Its fairly simple string data, four cols, all string, some long. The script works for data up to 8000 samples long, which is two parquet files totalling 260 kb. It breaks at 10k. \n2. Unfortunately, both data and code is private for now !\n3. I will try \n4. I did it both at CLI level when call my script, and tried inside the python script with os.environ[\"HF_HUB_DISABLE_XET\"] = \"1\"\n\nThe load is also partial, it starts for one file, but does not complete and no data file is pushed. \n\n```\n5. Pushing to Hugging Face Hub...\nPushing dataset to YourOrg/dataset-10000-test_set...\nCreating parquet from Arrow format: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 9/9 [00:00<00:00, 1235.07ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.018887Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFGSQV1FH8846S0DNS91C): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 291kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 291kB, 0.00B/s \nβ Failed to push test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set...\nCreating parquet from Arrow format: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 9/9 [00:00<00:00, 1289.10ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.721996Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFHFPJ2DC5D6JC93172H9): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 277kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 277kB, 0.00B/s \nβ Failed to push indic_test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set_combined...\nCreating parquet from Arrow format: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 6/6 [00:00<00:00, 1310.04ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:38.685575Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFJDTVAYM9MFTRDSWKTD6): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 184kB, 0.00B/s \nβ Failed to push indic_test_set_combined: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? shards/s]\n\nSummary:\n Succeeded: None\n Failed: [('test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set_combined', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')]\nβ Some datasets failed to upload\n```\n\n",
"Thanks for following up with more details, @n-bkoe \n\nCould you tell me more about your Sagemaker environment and how you are running this script? In testing with your steps to reproduce in a Sagemaker Jupyter notebook instance (and uploading Parquet datasets with splits of anywhere from a few KBs to a few hundred MBs), I've yet to reproduce this error. This makes me believe that it's either something about the Sagemaker environment or the reproduction steps that I'm not yet emulating. \n\nConcerning the `HF_HUB_DISABLE_XET` flag, you should ensure it is set before any package imports and in the same process where you are running the script itself. If either aren't true, then this environment variable will not work. You could also explicitly uninstall `hf-xet` from the environment, although that should be unnecessary with the `HF_HUB_DISABLE_XET` flag."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/7759
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7759/events
|
https://github.com/huggingface/datasets/issues/7759
| 3,398,099,513 |
I_kwDODunzps7KiuI5
| 7,759 |
Comment/feature request: Huggingface 502s from GHA
|
[] |
open
| false | null |
[] | null | 0 | 1,757 | 1,757 | null |
NONE
| null | null |
This is no longer a pressing issue, but for completeness I am reporting that in August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from actions, it appeared to be consistently failing for ~6 hours. However, these 502s never occurred when the request was invoked from my local machine in that same time period.
I suspect that this is related to how the requests are routed with github actions versus locally.
Its not clear to me if the request even reached huggingface servers or if its the github proxy that stopped it from going through, but I wanted to report it nonetheless in case this is helpful information. I'm curious if huggingface can do anything on their end to confirm cause.
And a feature request for if this happens in the future (assuming huggingface has visibilty on it): A "datasets status" page highlighting if 502s occur for specific individual datasets could be useful for people debugging on the other end of this!
| null |
https://api.github.com/repos/huggingface/datasets/issues/7759/timeline
| null | null |
Scott-Simmons
| 52,365,471 |
MDQ6VXNlcjUyMzY1NDcx
|
https://avatars.githubusercontent.com/u/52365471?v=4
|
https://api.github.com/users/Scott-Simmons
|
https://github.com/Scott-Simmons
|
https://api.github.com/users/Scott-Simmons/followers
|
https://api.github.com/users/Scott-Simmons/following{/other_user}
|
https://api.github.com/users/Scott-Simmons/gists{/gist_id}
|
https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}
|
https://api.github.com/users/Scott-Simmons/subscriptions
|
https://api.github.com/users/Scott-Simmons/orgs
|
https://api.github.com/users/Scott-Simmons/repos
|
https://api.github.com/users/Scott-Simmons/events{/privacy}
|
https://api.github.com/users/Scott-Simmons/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7759/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[] |
|
https://api.github.com/repos/huggingface/datasets/issues/7758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7758/events
|
https://github.com/huggingface/datasets/issues/7758
| 3,395,590,783 |
I_kwDODunzps7KZJp_
| 7,758 |
Option for Anonymous Dataset link
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 0 | 1,757 | 1,757 | null |
NONE
| null | null |
### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but datasets tend to be too large for inclusion as a zip. Being able to have an anonymous link would be great since we can't be double-publishing the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)?
| null |
https://api.github.com/repos/huggingface/datasets/issues/7758/timeline
| null | null |
egrace479
| 38,985,481 |
MDQ6VXNlcjM4OTg1NDgx
|
https://avatars.githubusercontent.com/u/38985481?v=4
|
https://api.github.com/users/egrace479
|
https://github.com/egrace479
|
https://api.github.com/users/egrace479/followers
|
https://api.github.com/users/egrace479/following{/other_user}
|
https://api.github.com/users/egrace479/gists{/gist_id}
|
https://api.github.com/users/egrace479/starred{/owner}{/repo}
|
https://api.github.com/users/egrace479/subscriptions
|
https://api.github.com/users/egrace479/orgs
|
https://api.github.com/users/egrace479/repos
|
https://api.github.com/users/egrace479/events{/privacy}
|
https://api.github.com/users/egrace479/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7758/reactions
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[] |
|
https://api.github.com/repos/huggingface/datasets/issues/7757
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7757/events
|
https://github.com/huggingface/datasets/issues/7757
| 3,389,535,011 |
I_kwDODunzps7KCDMj
| 7,757 |
Add support for `.conll` file format in datasets
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null | 1 | 1,757 | 1,757 | null |
NONE
| null | null |
### Feature request
Iβd like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manually parsed or preprocessed before being loaded into datasets. Having built in support would save time and make workflows smoother for researchers and practitioners.
I propose -
Add a conll dataset builder or file parser to datasets that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
"tokens": ["EU", "rejects", "German", "call", "."],
"tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll`
- Right now you have to write your own parsing scripts. Built in support would unify this process and would be much more convenient
### Your contribution
Iβd be happy to contribute by implementing this feature. My plan is to-
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so Iβll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers.
| null |
https://api.github.com/repos/huggingface/datasets/issues/7757/timeline
| null | null |
namesarnav
| 88,763,593 |
MDQ6VXNlcjg4NzYzNTkz
|
https://avatars.githubusercontent.com/u/88763593?v=4
|
https://api.github.com/users/namesarnav
|
https://github.com/namesarnav
|
https://api.github.com/users/namesarnav/followers
|
https://api.github.com/users/namesarnav/following{/other_user}
|
https://api.github.com/users/namesarnav/gists{/gist_id}
|
https://api.github.com/users/namesarnav/starred{/owner}{/repo}
|
https://api.github.com/users/namesarnav/subscriptions
|
https://api.github.com/users/namesarnav/orgs
|
https://api.github.com/users/namesarnav/repos
|
https://api.github.com/users/namesarnav/events{/privacy}
|
https://api.github.com/users/namesarnav/received_events
|
User
|
public
| false | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7757/reactions
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false |
[
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] |
GitHub Issues with Comments from the huggingface/datasets Repository
Dataset Summary
This dataset contains 1,945 structured GitHub issue records, together with their comments, collected from the huggingface/datasets repository. Each record includes metadata such as author details, timestamps, reactions and association type. The dataset is well suited to analysing open-source community dynamics, sentiment trends, contributor behaviour and natural-language patterns in technical discussions.
Dataset Structure
π Fields Overview
| Field Group | Description |
|---|---|
| Issue Metadata | url, repository_url, html_url, id, node_id, number, title, body, state, locked, comments, created_at, updated_at, closed_at, state_reason |
| User Info | user.login, user.id, user.type, user.site_admin |
| Reactions | Counts for +1, -1, laugh, hooray, confused, heart, rocket, eyes |
| Labels | Nested structure containing label name, color, description |
| Assignees / Milestone | Includes user objects and milestone metadata when present |
| Comments | comments_url, an integer comment count (comments), and comments_text (list of comment bodies) |
| Pull Request Exclusion | Boolean field is_pull_request used for filtering |
| Derived Data | comments_text is a list of comment strings retrieved using GitHub API calls |
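GitHub's `/issues` endpoint also returns pull requests; raw PR records carry a `pull_request` key, which is presumably how the boolean `is_pull_request` field above was derived. A minimal sketch with illustrative sample records (the PR entry is invented for demonstration):

```python
# Raw API records: plain issues have no "pull_request" key, PRs do.
raw = [
    {"number": 7766, "title": "cast columns to Image/Audio/Video"},
    {"number": 7700, "title": "some pull request", "pull_request": {"url": "..."}},
]

# Derive the boolean flag, then filter to issues only.
records = [{**r, "is_pull_request": "pull_request" in r} for r in raw]
issues_only = [r for r in records if not r["is_pull_request"]]
```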
Data Types
| Type | Example Columns |
|---|---|
| string | title, body, state, user.login |
| int64 | id, number, comments, reactions.total_count |
| bool | locked, user.site_admin, is_pull_request |
| list[string] | comments_text |
| list[object] | labels, assignees |
| float64 | some milestone and dependency metrics |
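One record sketched in the flat schema above, using values visible in the preview rows of this card (issue 7765); comment bodies are abridged. Nested `user` and `reactions` objects appear as dotted column names:

```python
# Illustrative single record matching the dataset's flat schema.
record = {
    "number": 7765,
    "title": "polars dataset cannot cast column to Image/Audio/Video",
    "state": "closed",
    "locked": False,
    "comments": 2,
    "is_pull_request": False,
    "user.login": "ain-soph",
    "user.site_admin": False,
    "reactions.total_count": 1,
    "comments_text": [
        "I fixed this with a combination of `to_dict` and `from_dict` ...",
        "Yeah, I'm using similar workaround as well ...",
    ],
}
```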
Source
- Repository: huggingface/datasets
- API Endpoint: https://api.github.com/repos/huggingface/datasets/issues
- Collection Method: automated retrieval via the GitHub REST API (authenticated requests, per_page=100, pages 1-40) using Python requests and pandas.
- Collection Date: 15 Oct 2025
- License: GitHub content is subject to GitHub Terms of Service
Intended Uses
- NLP Tasks: Sentiment analysis, topic modelling, summarisation
- Community Analytics: Contributor engagement, reaction trends
- Model Training: Fine-tuning on technical dialogue and bug reporting
- Feature Engineering: Text length, emoji usage, time-based features, user roles
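The feature-engineering uses listed above can be sketched with plain Python. The record below is an illustrative stand-in for one row, and we assume the timestamp in its ISO form as returned by the GitHub API (the card notes timestamps were parsed during collection):

```python
from datetime import datetime

# Illustrative record; fields abridged.
record = {
    "title": "polars dataset cannot cast column to Image/Audio/Video",
    "body": "### Describe the bug\n`from_polars` dataset cannot cast ...",
    "created_at": "2025-09-10T09:44:35Z",
    "comments_text": ["I fixed this with `to_dict` ...", "Same workaround here."],
}

# Simple derived features: text lengths, comment count, weekday of creation.
features = {
    "title_len": len(record["title"]),
    "body_len": len(record["body"]),
    "n_comments": len(record["comments_text"]),
    "created_weekday": datetime.fromisoformat(
        record["created_at"].replace("Z", "+00:00")
    ).strftime("%A"),
}
```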
Limitations
- Comments are limited to public issues only.
- Incomplete comment history: only the first 100 comments per issue are retrieved, due to GitHub pagination limits, unless additional pages are requested.
- Temporal Bias: The dataset represents the repository state at the time of collection; issue activity is continuously evolving.
- Limited coverage: Data reflects the practices of one repository (huggingface/datasets), not all open-source projects.
Dataset Creation script
- Data was collected with the Python requests library via the GitHub REST API:
- url = "https://api.github.com/repos/huggingface/datasets/issues?page={page}&per_page=100&state=all"
- headers = {"Authorization": f"token {your_token}"}
- response = requests.get(url, headers=headers)
- Comments were flattened into a tabular format. Timestamps were parsed, and nested fields (e.g., user, reactions) were expanded.
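The expansion of nested fields can be sketched with a small stdlib helper that produces the dotted column names (user.login, reactions.total_count) used in this card's schema; the real pipeline reportedly used requests and pandas, and the sample payload here is illustrative:

```python
def flatten(rec: dict, prefix: str = "") -> dict:
    """Expand nested dicts into dotted column names, e.g. user.login."""
    out = {}
    for key, val in rec.items():
        name = f"{prefix}{key}"
        if isinstance(val, dict):
            out.update(flatten(val, prefix=f"{name}."))  # recurse into nested objects
        else:
            out[name] = val
    return out

row = flatten({
    "number": 7765,
    "user": {"login": "ain-soph", "site_admin": False},
    "reactions": {"total_count": 1, "+1": 1},
})
```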
License
- License Name: BigScience OpenRAIL-M License
- The dataset is available for responsible research use.
- It must not be used for harmful, unethical or malicious applications.
- Proper attribution to data sources (GitHub and Hugging Face) is maintained.
Ethical Considerations
- All data is publicly available via GitHubβs API.
- Usernames and profile links are included; please respect contributor privacy and avoid misuse.
Privacy considerations
- All data was collected from publicly accessible GitHub issue comments via the GitHub REST API.
- Usernames, profile URLs and other metadata are included as part of the original public content.
- No private or sensitive data was collected or included.
Storage and Distribution
- The dataset is stored and distributed via the Hugging Face Hub.
- Download size: ~3.85MB
- Number of records: 1,945
References
- GitHub REST API documentation: https://docs.github.com/en/rest
- Hugging Face Datasets Documentation: https://huggingface.co/docs/datasets
- BigScience OpenRAIL-M License: https://huggingface.co/spaces/bigscience/license
Attribution
- This dataset contains text derived from public GitHub Issues and Comments via the GitHub REST API.
- GitHub users, used under the GitHub Terms of Service.
- Redistributed under the BigScience OpenRAIL-M license for research and non-commercial use only