Dataset Viewer

Column schema (GitHub issues and pull requests from huggingface/datasets):

- url: string (58–61 chars)
- repository_url: string (1 class)
- labels_url: string (72–75 chars)
- comments_url: string (67–70 chars)
- events_url: string (65–68 chars)
- html_url: string (46–51 chars)
- id: int64 (599M–3.53B)
- node_id: string (18–32 chars)
- number: int64 (1–7.82k)
- title: string (1–290 chars)
- user: dict
- labels: list (0–4 items)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (0–4 items)
- milestone: dict
- comments: int64 (0–70)
- created_at: date string (2020-04-14 10:18:02 – 2025-10-20 06:38:19)
- updated_at: date string (2020-04-27 16:04:17 – 2025-10-20 06:41:20)
- closed_at: string (3–25 chars)
- author_association: string (4 classes)
- type: float64
- active_lock_reason: float64
- draft: float64 (0, 1, or ⌀)
- pull_request: dict
- body: string (0–228k chars, or ⌀)
- closed_by: dict
- reactions: dict
- timeline_url: string (67–70 chars)
- performed_via_github_app: float64
- state_reason: string (4 classes)
- sub_issues_summary: dict
- issue_dependencies_summary: dict
- is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/7824
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7824/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7824/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7824/events
|
https://github.com/huggingface/datasets/pull/7824
| 3,531,240,254
|
PR_kwDODunzps6ukXe9
| 7,824
|
Fix batch_size default description in to_polars docstrings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 1
|
2025-10-20 06:38:19+00:00
|
2025-10-20 06:41:20+00:00
|
NaT
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7824",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7824"
}
|
Fix batch_size default description in `to_polars` docstrings.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7824/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7824/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7823
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7823/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7823/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7823/events
|
https://github.com/huggingface/datasets/pull/7823
| 3,525,440,347
|
PR_kwDODunzps6uRkGa
| 7,823
|
Fix random seed on shuffle and interleave_datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-10-17 10:21:47+00:00
|
2025-10-17 14:11:18+00:00
|
NaT
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7823.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7823",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7823.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7823"
}
|
closes #7567
Adds a `shift_rngs` method to `ExamplesIterable` that is called directly after sharding. If a generator is available (which is not the case for all subclasses), we update its seed by shifting it by the `worker_id`.
~This is just the fix for `shuffle`; the corresponding issue also mentions `interleave_datasets`, which won't be fixed with this approach.~
EDIT: This is a fix for both `shuffle` and `interleave_datasets`. Making `shift_rngs` recursive fixed `interleave_datasets` as well. I'm not sure whether this is completely safe or whether it could break something; I don't think so, but I could be wrong and would appreciate guidance from the maintainers. I also checked that with a single worker we always hand over `index=0`, so that case preserves the seed the user specified.
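The per-worker seed-shifting idea can be sketched like this (a toy illustration, not the actual `datasets` implementation; `shift_rng` and the seed arithmetic are assumptions):

```python
import numpy as np

def shift_rng(base_seed: int, worker_id: int) -> np.random.Generator:
    """Derive a per-worker generator by shifting the base seed by worker_id.

    worker_id == 0 leaves the user-specified seed untouched, matching the
    single-worker case mentioned above.
    """
    return np.random.default_rng(base_seed + worker_id)

# Worker 0 reproduces the unshifted stream; other workers diverge, so each
# shard gets a different but reproducible shuffle order.
base = shift_rng(42, 0).integers(0, 100, size=8).tolist()
same = np.random.default_rng(42).integers(0, 100, size=8).tolist()
other = shift_rng(42, 1).integers(0, 100, size=8).tolist()
```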
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7823/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7823/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7822
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7822/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7822/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7822/events
|
https://github.com/huggingface/datasets/pull/7822
| 3,525,309,651
|
PR_kwDODunzps6uRKIJ
| 7,822
|
Retry open hf file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-17 09:48:51+00:00
|
2025-10-17 09:52:05+00:00
|
2025-10-17 09:51:35+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7822.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7822",
"merged_at": "2025-10-17T09:51:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7822.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7822"
}
|
Fix this error
```python
File "/workdir/.venv/lib/python3.13/site-packages/datasets/utils/file_utils.py", line 934, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
File "/workdir/.venv/lib/python3.13/site-packages/fsspec/core.py", line 147, in open
return self.__enter__()
~~~~~~~~~~~~~~^^
File "/workdir/.venv/lib/python3.13/site-packages/fsspec/core.py", line 105, in __enter__
f = self.fs.open(self.path, mode=mode)
File "/workdir/.venv/lib/python3.13/site-packages/fsspec/spec.py", line 1338, in open
f = self._open(
path,
...<4 lines>...
**kwargs,
)
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/hf_file_system.py", line 275, in _open
return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/hf_file_system.py", line 950, in __init__
self.resolved_path = fs.resolve_path(path, revision=revision)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/hf_file_system.py", line 198, in resolve_path
repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/hf_file_system.py", line 125, in _repo_and_revision_exist
self._api.repo_info(
~~~~~~~~~~~~~~~~~~~^
repo_id, revision=revision, repo_type=repo_type, timeout=constants.HF_HUB_ETAG_TIMEOUT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/hf_api.py", line 2864, in repo_info
return method(
repo_id,
...<4 lines>...
files_metadata=files_metadata,
)
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/hf_api.py", line 2721, in dataset_info
r = get_session().get(path, headers=headers, timeout=timeout, params=params)
File "/workdir/.venv/lib/python3.13/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.13/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/workdir/.venv/lib/python3.13/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/workdir/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_http.py", line 95, in send
return super().send(request, *args, **kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.13/site-packages/requests/adapters.py", line 690, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: (ReadTimeoutError("HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: e7e1ae72-54a0-4ce4-b011-144fb7a3fb06)')
```
which could also be related to
```python
File "/workdir/.venv/lib/python3.13/site-packages/datasets/utils/file_utils.py", line 1364, in _iter_from_urlpaths
raise FileNotFoundError(urlpath)
FileNotFoundError: hf://datasets/.../train-00013-of-00031.parquet
```
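The retry idea behind the fix can be sketched as a small wrapper (an illustration only, not the merged implementation; the helper name, exception type, and backoff values are assumptions — the real code catches the HTTP timeout errors shown in the traceback above):

```python
import time

def open_with_retry(open_fn, max_retries=3, base_wait=0.1):
    """Call open_fn, retrying with linear backoff on timeout-like errors."""
    for attempt in range(max_retries):
        try:
            return open_fn()
        except TimeoutError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(base_wait * (attempt + 1))

# Simulate a flaky opener that times out twice before succeeding.
calls = {"n": 0}
def flaky_open():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("read timed out")
    return "file-object"

result = open_with_retry(flaky_open)
```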
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7822/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7822/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7821
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7821/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7821/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7821/events
|
https://github.com/huggingface/datasets/issues/7821
| 3,520,913,195
|
I_kwDODunzps7R3N8r
| 7,821
|
Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kkoutini",
"id": 51880718,
"login": "kkoutini",
"node_id": "MDQ6VXNlcjUxODgwNzE4",
"organizations_url": "https://api.github.com/users/kkoutini/orgs",
"received_events_url": "https://api.github.com/users/kkoutini/received_events",
"repos_url": "https://api.github.com/users/kkoutini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kkoutini",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-10-16 08:45:17+00:00
|
2025-10-16 08:54:14+00:00
|
NaT
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When I use `map` to store raw audio waveforms of variable length in a column of a dataset, the `map` call fails with `ArrowInvalid: Value X too large to fit in C integer type`.
```
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3526, in _map_single
writer.write_batch(batch)
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 605, in write_batch
arrays.append(pa.array(typed_sequence))
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 252, in pyarrow.lib.array
File "pyarrow/array.pxi", line 114, in pyarrow.lib._handle_arrow_array_protocol
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 225, in __arrow_array__
out = list_of_np_array_to_pyarrow_listarray(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1538, in list_of_np_array_to_pyarrow_listarray
return list_of_pa_arrays_to_pyarrow_listarray(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1530, in list_of_pa_arrays_to_pyarrow_listarray
offsets = pa.array(offsets, type=pa.int32())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 362, in pyarrow.lib.array
File "pyarrow/array.pxi", line 87, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2148479376 too large to fit in C integer type
```
### Steps to reproduce the bug
Call `map` on a dataset with a function that returns a column of long 1-D NumPy arrays of variable length.
Example:
```python
# %%
import logging
import datasets
import pandas as pd
import numpy as np
# %%
def process_batch(batch, rank):
res = []
for _ in batch["id"]:
res.append(np.zeros((2**30)).astype(np.uint16))
return {"audio": res}
if __name__ == "__main__":
df = pd.DataFrame(
{
"id": list(range(400)),
}
)
ds = datasets.Dataset.from_pandas(df)
try:
from multiprocess import set_start_method
set_start_method("spawn")
except RuntimeError:
print("Spawn method already set, continuing...")
mapped_ds = ds.map(
process_batch,
batched=True,
batch_size=2,
with_rank=True,
num_proc=2,
cache_file_name="path_to_cache/tmp.arrow",
writer_batch_size=200,
remove_columns=ds.column_names,
# disable_nullable=True,
)
```
### Expected behavior
I think the offsets should be widened to `pa.int64()` when needed, rather than being forced to `pa.int32()` as in
https://github.com/huggingface/datasets/blob/3e13d30823f8ec498d56adbc18c6880a5463b313/src/datasets/features/features.py#L1535
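The suggested widening can be sketched in pure NumPy (illustrative only; `list_offsets` and `FakeAudio` are hypothetical names, and the real code would build PyArrow `ListArray`/`LargeListArray` values from these offsets):

```python
import numpy as np

def list_offsets(arrays):
    """Cumulative list offsets, widened to int64 when int32 would overflow."""
    lengths = np.array([0] + [len(a) for a in arrays], dtype=np.int64)
    offsets = np.cumsum(lengths)
    if offsets[-1] > np.iinfo(np.int32).max:
        return offsets                 # int64 offsets (large_list territory)
    return offsets.astype(np.int32)    # int32 offsets (plain list)

class FakeAudio:
    """Stand-in for a long waveform; only its length matters here."""
    def __init__(self, n): self._n = n
    def __len__(self): return self._n

small = list_offsets([FakeAudio(3), FakeAudio(2)])
big = list_offsets([FakeAudio(2**30)] * 3)  # total 3*2^30 > int32 max
```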
### Environment info
- `datasets` version: 3.3.1
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.12.9
- `huggingface_hub` version: 0.29.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7821/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7821/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7820
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7820/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7820/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7820/events
|
https://github.com/huggingface/datasets/pull/7820
| 3,518,633,577
|
PR_kwDODunzps6t6suZ
| 7,820
|
Keep hffs cache in workers when streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-15 15:51:28+00:00
|
2025-10-17 09:59:17+00:00
|
2025-10-17 09:59:16+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7820.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7820",
"merged_at": "2025-10-17T09:59:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7820.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7820"
}
|
(and also reorder the hffs args to improve caching)
When using `DataLoader(iterable_dataset, num_workers=...)`, the dataset is pickled and passed to the workers. Previously, the resulting dataset would live in a process with an empty hffs cache. By keeping the cache attached to `IterableDataset`, the cached hffs instances are pickled with the dataset and re-populate the cache in the DataLoader workers.
This requires https://github.com/huggingface/huggingface_hub/pull/3443 to work effectively, though; otherwise the unpickled hffs cache would start empty.
cc @andimarafioti @ltmeyer
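The mechanism described above can be sketched with plain `pickle` (a toy stand-in: the dict below plays the role of the `IterableDataset` state carrying cached hffs instances; the keys and values are made up):

```python
import pickle

# Parent process: the dataset object carries its filesystem cache as state.
dataset_state = {
    "shards": ["train-00000.parquet"],
    "fs_cache": {"hf://datasets/some/repo": "resolved-fs-instance"},
}

# A DataLoader pickles the dataset to ship it to a worker process; because
# the cache is part of the pickled state, the worker starts with it warm
# instead of re-resolving every file.
payload = pickle.dumps(dataset_state)
worker_state = pickle.loads(payload)
```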
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7820/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7820/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7819
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7819/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7819/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7819/events
|
https://github.com/huggingface/datasets/issues/7819
| 3,517,086,110
|
I_kwDODunzps7Ronme
| 7,819
|
Cannot download opus dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51946663?v=4",
"events_url": "https://api.github.com/users/liamsun2019/events{/privacy}",
"followers_url": "https://api.github.com/users/liamsun2019/followers",
"following_url": "https://api.github.com/users/liamsun2019/following{/other_user}",
"gists_url": "https://api.github.com/users/liamsun2019/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liamsun2019",
"id": 51946663,
"login": "liamsun2019",
"node_id": "MDQ6VXNlcjUxOTQ2NjYz",
"organizations_url": "https://api.github.com/users/liamsun2019/orgs",
"received_events_url": "https://api.github.com/users/liamsun2019/received_events",
"repos_url": "https://api.github.com/users/liamsun2019/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liamsun2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liamsun2019/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liamsun2019",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-10-15 09:06:19+00:00
|
2025-10-15 09:06:19+00:00
|
NaT
|
NONE
| null | null | null | null |
When I tried to download opus_books using:
from datasets import load_dataset
dataset = load_dataset("Helsinki-NLP/opus_books")
I got the following errors:
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
I also tried:
dataset = load_dataset("opus_books", "en-zh")
and the errors remain the same. However, I can download "mlabonne/FineTome-100k" successfully.
My `datasets` version is 4.2.0.
Any clues? Big thanks.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7819/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7819/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7818
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7818/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7818/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7818/events
|
https://github.com/huggingface/datasets/issues/7818
| 3,515,887,618
|
I_kwDODunzps7RkDAC
| 7,818
|
train_test_split and stratify breaks with Numpy 2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24845694?v=4",
"events_url": "https://api.github.com/users/davebulaval/events{/privacy}",
"followers_url": "https://api.github.com/users/davebulaval/followers",
"following_url": "https://api.github.com/users/davebulaval/following{/other_user}",
"gists_url": "https://api.github.com/users/davebulaval/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davebulaval",
"id": 24845694,
"login": "davebulaval",
"node_id": "MDQ6VXNlcjI0ODQ1Njk0",
"organizations_url": "https://api.github.com/users/davebulaval/orgs",
"received_events_url": "https://api.github.com/users/davebulaval/received_events",
"repos_url": "https://api.github.com/users/davebulaval/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davebulaval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davebulaval/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davebulaval",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 1
|
2025-10-15 00:01:19+00:00
|
2025-10-15 06:17:32+00:00
|
NaT
|
NONE
| null | null | null | null |
### Describe the bug
As stated in the title, the `stratify_by_column` parameter breaks since NumPy 2.0 changed the semantics of the `copy` argument.
e.g. `all_dataset.train_test_split(test_size=0.2, stratify_by_column="label")` raises a NumPy error.
It works if you downgrade NumPy to a version lower than 2.0.
### Steps to reproduce the bug
1. Numpy > 2.0
2. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")`
### Expected behavior
It should return a stratified split, as it did with NumPy < 2.0.
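For context, the NumPy 2.0 change most likely at play is the strict meaning of `copy=False` in `np.array` (it now raises instead of silently copying). This reproduction is an assumption about where the break occurs, not a trace of the actual stratification code; `np.asarray` is the 2.x-safe spelling:

```python
import numpy as np

labels = [0, 1, 0, 1]  # a plain list always requires a copy

try:
    arr = np.array(labels, copy=False)  # raises ValueError on NumPy >= 2.0
    raised = False
except ValueError:
    raised = True

safe = np.asarray(labels)  # works on both NumPy 1.x and 2.x
```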
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.13.7
- Huggingface_hub version: 0.34.4
- PyArrow version: 19.0.0
- Pandas version: 2.3.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7818/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7818/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7817
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7817/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7817/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7817/events
|
https://github.com/huggingface/datasets/pull/7817
| 3,515,755,952
|
PR_kwDODunzps6tw-GG
| 7,817
|
fix: better args passthrough for `_batch_setitems()`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/58419736?v=4",
"events_url": "https://api.github.com/users/sghng/events{/privacy}",
"followers_url": "https://api.github.com/users/sghng/followers",
"following_url": "https://api.github.com/users/sghng/following{/other_user}",
"gists_url": "https://api.github.com/users/sghng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sghng",
"id": 58419736,
"login": "sghng",
"node_id": "MDQ6VXNlcjU4NDE5NzM2",
"organizations_url": "https://api.github.com/users/sghng/orgs",
"received_events_url": "https://api.github.com/users/sghng/received_events",
"repos_url": "https://api.github.com/users/sghng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sghng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sghng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sghng",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 3
|
2025-10-14 22:51:51+00:00
|
2025-10-16 11:27:20+00:00
|
NaT
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7817.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7817",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7817.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7817"
}
|
In Python 3.14, the signature of `_Pickler._batch_setitems` changed to:
```python
# pickle.py
def _batch_setitems(self, items, obj):
# Helper to batch up SETITEMS sequences; proto >= 1 only
save = self.save
write = self.write
```
To accommodate this, `dill` has this compatibility code:
```python
if sys.hexversion < 0x30E00A1:
pickler._batch_setitems(iter(source.items()))
else:
pickler._batch_setitems(iter(source.items()), obj=obj)
```
As a result, the `datasets` package emits this error:
```
│ /Users/sghuang/mamba/envs/ds/lib/python3.14/site-packages/dill/_dill.py:1262 in save_module_dict │
│ │
│ 1259 │ │ if is_dill(pickler, child=False) and pickler._session: │
│ 1260 │ │ │ # we only care about session the first pass thru │
│ 1261 │ │ │ pickler._first_pass = False │
│ ❱ 1262 │ │ StockPickler.save_dict(pickler, obj) │
│ 1263 │ │ logger.trace(pickler, "# D2") │
│ 1264 │ return │
│ 1265 │
│ │
│ /Users/sghuang/mamba/envs/ds/lib/python3.14/pickle.py:1133 in save_dict │
│ │
│ 1130 │ │ print(f"Line number: {inspect.getsourcelines(method)[1]}") │
│ 1131 │ │ print(f"Full path: {inspect.getmodule(method)}") │
│ 1132 │ │ print(f"Class: {method.__qualname__}") │
│ ❱ 1133 │ │ self._batch_setitems(obj.items(), obj) │
│ 1134 │ │
│ 1135 │ dispatch[dict] = save_dict │
│ 1136 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
[NOTE] when serializing datasets.table.InMemoryTable state
[NOTE] when serializing datasets.table.InMemoryTable object
```
To fix it, we update the signature of the `_batch_setitems` method defined in `utils/_dill.py`.
This fix should be backward compatible, since the compatibility is handled by `dill`.
This should close #7813.
Similar to https://github.com/joblib/joblib/issues/1658.
Related to https://github.com/uqfoundation/dill/pull/724.
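The shape of such a fix can be sketched as an override that tolerates both call styles (illustrative only; `CompatPickler` is a hypothetical stand-in, not the actual patch to `utils/_dill.py`):

```python
class CompatPickler:
    def _batch_setitems(self, items, obj=None):
        # Giving `obj` a default keeps both call styles working:
        #   pickler._batch_setitems(items)        # Python < 3.14 / old dill
        #   pickler._batch_setitems(items, obj)   # Python >= 3.14
        return list(items), obj

p = CompatPickler()
old_style = p._batch_setitems(iter({"a": 1}.items()))
new_style = p._batch_setitems(iter({"a": 1}.items()), obj={"a": 1})
```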
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7817/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7817/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7816
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7816/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7816/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7816/events
|
https://github.com/huggingface/datasets/issues/7816
| 3,512,210,206
|
I_kwDODunzps7RWBMe
| 7,816
|
disable_progress_bar() not working as expected
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/windmaple",
"id": 5577741,
"login": "windmaple",
"node_id": "MDQ6VXNlcjU1Nzc3NDE=",
"organizations_url": "https://api.github.com/users/windmaple/orgs",
"received_events_url": "https://api.github.com/users/windmaple/received_events",
"repos_url": "https://api.github.com/users/windmaple/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windmaple/subscriptions",
"type": "User",
"url": "https://api.github.com/users/windmaple",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 2
|
2025-10-14 03:25:39+00:00
|
2025-10-14 23:49:26+00:00
|
2025-10-14 23:49:26+00:00
|
NONE
| null | null | null | null |
### Describe the bug
Hi,
I'm trying to load a dataset on a Kaggle TPU image. There are some known compatibility issues with progress bars on Kaggle, so I'm trying to disable the progress bar globally. This does not work, as you can see [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling the progress bar for snapshot_download() works as expected, as shown [here](https://www.kaggle.com/code/windmaple/snapshot-download-error).
### Steps to reproduce the bug
See this [notebook](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
There is something wrong with `shell_paraent`.
### Expected behavior
The downloader should disable the progress bar and proceed with no error.
### Environment info
The latest versions, installed with:
`!pip install -U datasets ipywidgets ipykernel`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/windmaple",
"id": 5577741,
"login": "windmaple",
"node_id": "MDQ6VXNlcjU1Nzc3NDE=",
"organizations_url": "https://api.github.com/users/windmaple/orgs",
"received_events_url": "https://api.github.com/users/windmaple/received_events",
"repos_url": "https://api.github.com/users/windmaple/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windmaple/subscriptions",
"type": "User",
"url": "https://api.github.com/users/windmaple",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7816/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7816/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7815
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7815/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7815/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7815/events
|
https://github.com/huggingface/datasets/pull/7815
| 3,511,338,522
|
PR_kwDODunzps6tiDIT
| 7,815
|
Add nifti support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-10-13 20:07:32+00:00
|
2025-10-14 17:52:13+00:00
|
NaT
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7815",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7815"
}
|
Add support for NIfTI.
supports #7804
This PR follows https://github.com/huggingface/datasets/pull/7325 very closely
I am a bit unsure what we need to add to `document_dataset.mdx` and `document_load.mdx`. I should probably create a dataset on the Hub first to write this guide, instead of copy-pasting from the PDF docs.
Open todos:
- [x] create nifti dataset on the hub
- ~[ ] update `document_dataset.mdx` and `document_load.mdx`~
EDIT:
I tested with two datasets I created on the hub:
- https://huggingface.co/datasets/TobiasPitters/test-nifti-unzipped
- https://huggingface.co/datasets/TobiasPitters/test-nifti
for zipped (file extension `.nii.gz`) and unzipped (`.nii`) files, and both seem to work fine. I also tested loading locally and that seems to work as well.
Here is the script that I ran against the Hub:
```python
from datasets import load_dataset
import nibabel as nib

dataset = load_dataset(
    "TobiasPitters/test-nifti-unzipped",
    split="test",  # load as a single Dataset, not a DatasetDict
)
print("length dataset unzipped:", len(dataset))
for item in dataset:
    assert isinstance(item["nifti"], nib.nifti1.Nifti1Image)

dataset = load_dataset(
    "TobiasPitters/test-nifti",
    split="train",  # load as a single Dataset, not a DatasetDict
)
print("length dataset zipped:", len(dataset))
for item in dataset:
    assert isinstance(item["nifti"], nib.nifti1.Nifti1Image)
```
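One detail worth noting from the zipped case: `.nii.gz` payloads can also be recognized by the gzip magic bytes rather than the filename alone. A small stdlib-only sketch (`open_maybe_gzipped` is a hypothetical helper for illustration, not part of this PR):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of any gzip stream


def open_maybe_gzipped(raw: bytes) -> bytes:
    """Return decompressed bytes if `raw` is gzip data, else `raw` unchanged."""
    if raw[:2] == GZIP_MAGIC:
        return gzip.decompress(raw)
    return raw


compressed = gzip.compress(b"NIfTI payload")
assert open_maybe_gzipped(compressed) == b"NIfTI payload"
assert open_maybe_gzipped(b"NIfTI payload") == b"NIfTI payload"
```

Sniffing magic bytes is robust when files are renamed or served without an extension, while extension-based routing (as in this PR) is cheaper since it avoids reading the file.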
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7815/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7815/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7814
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7814/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7814/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7814/events
|
https://github.com/huggingface/datasets/pull/7814
| 3,510,488,792
|
PR_kwDODunzps6tfJCm
| 7,814
|
Allow streaming hdf5 files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-13 15:25:44+00:00
|
2025-10-13 15:28:51+00:00
|
2025-10-13 15:28:49+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7814",
"merged_at": "2025-10-13T15:28:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7814"
}
|
Add streaming support after https://github.com/huggingface/datasets/pull/7690, cc @klamike :)
## Details
In `datasets` loaders, `open()` is extended to work with files on disk as well as files on the Hugging Face Hub. Files on the Hub are streamed via HTTP range requests through the `HfFileSystem` implementation in the `huggingface_hub` library.
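To make the range-request idea concrete, here is a stdlib-only sketch of a seekable, file-like reader backed by byte-range fetches (`fetch` stands in for the HTTP `Range` requests that `HfFileSystem` issues; this is an illustration under those assumptions, not the actual implementation):

```python
import io


class RangeReader(io.RawIOBase):
    """Seekable reader that pulls bytes on demand via range fetches.

    `fetch(start, end)` is a hypothetical callable returning bytes for
    [start, end); over HTTP it would translate to `Range: bytes=start-(end-1)`.
    """

    def __init__(self, fetch, size):
        self._fetch = fetch
        self._size = size
        self._pos = 0

    def readable(self):
        return True

    def seekable(self):
        return True

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self._pos = offset
        elif whence == io.SEEK_CUR:
            self._pos += offset
        else:  # io.SEEK_END
            self._pos = self._size + offset
        return self._pos

    def read(self, n=-1):
        # Fetch only the requested window instead of the whole file.
        if n is None or n < 0:
            n = self._size - self._pos
        end = min(self._pos + n, self._size)
        data = self._fetch(self._pos, end)
        self._pos = end
        return data


# In-memory stand-in for a remote file.
blob = bytes(range(256))
reader = RangeReader(lambda s, e: blob[s:e], len(blob))
reader.seek(10)
assert reader.read(4) == blob[10:14]
```

Because the reader is seekable, format libraries like h5py can jump between a file's index and data blocks while downloading only the byte ranges they actually touch.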
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7814/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7814/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7813
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7813/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7813/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7813/events
|
https://github.com/huggingface/datasets/issues/7813
| 3,503,446,288
|
I_kwDODunzps7Q0lkQ
| 7,813
|
Caching does not work when using python3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 2
|
2025-10-10 15:36:46+00:00
|
2025-10-14 23:02:02+00:00
|
NaT
|
NONE
| null | null | null | null |
### Describe the bug
```
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # or "synthdog-zh" for Chinese
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 612, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 485, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
```python
ds_train = ds["train"].map(lambda x: {**x, "lang": lang})
```
### Expected behavior
Fixed bugs
### Environment info
- `datasets` version: 4.2.0
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39
- Python version: 3.14.0
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7813/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7813/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7812
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7812/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7812/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7812/events
|
https://github.com/huggingface/datasets/pull/7812
| 3,500,901,422
|
PR_kwDODunzps6s_New
| 7,812
|
docs: document_dataset PDFs & OCR
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34215814?v=4",
"events_url": "https://api.github.com/users/ethanknights/events{/privacy}",
"followers_url": "https://api.github.com/users/ethanknights/followers",
"following_url": "https://api.github.com/users/ethanknights/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanknights/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ethanknights",
"id": 34215814,
"login": "ethanknights",
"node_id": "MDQ6VXNlcjM0MjE1ODE0",
"organizations_url": "https://api.github.com/users/ethanknights/orgs",
"received_events_url": "https://api.github.com/users/ethanknights/received_events",
"repos_url": "https://api.github.com/users/ethanknights/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ethanknights/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanknights/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ethanknights",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-10-09 23:31:41+00:00
|
2025-10-09 23:31:41+00:00
|
NaT
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7812.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7812",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7812.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7812"
}
|
Use acronyms consistently across document_dataset docs.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7812/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7812/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7811
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7811/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7811/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7811/events
|
https://github.com/huggingface/datasets/issues/7811
| 3,500,741,658
|
I_kwDODunzps7QqRQa
| 7,811
|
SIGSEGV when Python exits due to near null deref
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5192353?v=4",
"events_url": "https://api.github.com/users/iankronquist/events{/privacy}",
"followers_url": "https://api.github.com/users/iankronquist/followers",
"following_url": "https://api.github.com/users/iankronquist/following{/other_user}",
"gists_url": "https://api.github.com/users/iankronquist/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iankronquist",
"id": 5192353,
"login": "iankronquist",
"node_id": "MDQ6VXNlcjUxOTIzNTM=",
"organizations_url": "https://api.github.com/users/iankronquist/orgs",
"received_events_url": "https://api.github.com/users/iankronquist/received_events",
"repos_url": "https://api.github.com/users/iankronquist/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iankronquist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iankronquist/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iankronquist",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 4
|
2025-10-09 22:00:11+00:00
|
2025-10-10 22:09:24+00:00
|
NaT
|
NONE
| null | null | null | null |
### Describe the bug
When I run the following Python script using `datasets`, I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Current executable set to '/Users/ian/bug/venv/bin/python3' (arm64).
(lldb) settings set -- target.run-args "crashmin.py"
(lldb) r
Process 8095 launched: '/Users/ian/bug/venv/bin/python3' (arm64)
Process 8095 stopped
* thread #2, stop reason = exec
frame #0: 0x0000000100014b30 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100014b30 <+0>: mov x0, sp
0x100014b34 <+4>: and sp, x0, #0xfffffffffffffff0
0x100014b38 <+8>: mov x29, #0x0 ; =0
Target 0: (Python) stopped.
(lldb) c
Process 8095 resuming
cols : 0% 0/1000 [00:00<?, ?cols/s]Process 8095 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
_datetime.cpython-313-darwin.so`delta_new:
-> 0x101783454 <+188>: ldr x3, [x20, #0x10]
0x101783458 <+192>: adrp x0, 10
0x10178345c <+196>: add x0, x0, #0x6fc ; "seconds"
Target 0: (Python) stopped.
(lldb) bt
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
* frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
frame #1: 0x0000000100704b60 Python`type_call + 96
frame #2: 0x000000010067ba34 Python`_PyObject_MakeTpCall + 120
frame #3: 0x00000001007aae3c Python`_PyEval_EvalFrameDefault + 30236
frame #4: 0x000000010067c900 Python`PyObject_CallOneArg + 112
frame #5: 0x000000010070f0a0 Python`slot_tp_finalize + 116
frame #6: 0x000000010070c3b4 Python`subtype_dealloc + 788
frame #7: 0x00000001006c378c Python`insertdict + 756
frame #8: 0x00000001006db2b0 Python`_PyModule_ClearDict + 660
frame #9: 0x000000010080a9a8 Python`finalize_modules + 1772
frame #10: 0x0000000100809a44 Python`_Py_Finalize + 264
frame #11: 0x0000000100837630 Python`Py_RunMain + 252
frame #12: 0x0000000100837ef8 Python`pymain_main + 304
frame #13: 0x0000000100837f98 Python`Py_BytesMain + 40
frame #14: 0x000000019cfcc274 dyld`start + 2840
(lldb) register read x20
x20 = 0x0000000000000000
(lldb)
```
### Steps to reproduce the bug
Run the script above, and observe the segfault.
### Expected behavior
No segfault
### Environment info
```
% pip freeze datasets | grep -i datasets
datasets==4.2.0
(venv) 0 ~/bug 14:58:06
% pip freeze tqdm | grep -i tqdm
tqdm==4.67.1
(venv) 0 ~/bug 14:58:16
% python --version
Python 3.13.7
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7811/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7811/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7810
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7810/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7810/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7810/events
|
https://github.com/huggingface/datasets/pull/7810
| 3,499,855,569
|
PR_kwDODunzps6s7wHa
| 7,810
|
fix conda deps
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 16:32:04+00:00
|
2025-10-09 16:35:15+00:00
|
2025-10-09 16:35:14+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7810.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7810",
"merged_at": "2025-10-09T16:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7810.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7810"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7810/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7810/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7809
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7809/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7809/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7809/events
|
https://github.com/huggingface/datasets/pull/7809
| 3,499,811,179
|
PR_kwDODunzps6s7mwb
| 7,809
|
Set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 16:19:19+00:00
|
2025-10-09 16:22:12+00:00
|
2025-10-09 16:19:31+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7809",
"merged_at": "2025-10-09T16:19:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7809"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7809/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7809/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7808
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7808/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7808/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7808/events
|
https://github.com/huggingface/datasets/pull/7808
| 3,499,779,993
|
PR_kwDODunzps6s7gBq
| 7,808
|
release: 4.2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 16:10:53+00:00
|
2025-10-09 16:21:01+00:00
|
2025-10-09 16:11:08+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7808.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7808",
"merged_at": "2025-10-09T16:11:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7808.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7808"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7808/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7808/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7807
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7807/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7807/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7807/events
|
https://github.com/huggingface/datasets/pull/7807
| 3,499,765,725
|
PR_kwDODunzps6s7c_U
| 7,807
|
typo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 16:06:47+00:00
|
2025-10-09 16:16:31+00:00
|
2025-10-09 16:06:58+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7807",
"merged_at": "2025-10-09T16:06:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7807"
}
|
add an s to be consistent with pandas' on_bad_lines
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7807/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7807/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7806
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7806/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7806/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7806/events
|
https://github.com/huggingface/datasets/pull/7806
| 3,499,483,246
|
PR_kwDODunzps6s6gnr
| 7,806
|
Parquet: add `on_bad_file` argument to error/warn/skip bad files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 14:41:46+00:00
|
2025-10-09 16:04:35+00:00
|
2025-10-09 16:04:33+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7806",
"merged_at": "2025-10-09T16:04:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7806"
}
|
```python
from datasets import load_dataset
on_bad_file = "error" # default
# on_bad_file = "warn" # warn and skip bad file
# on_bad_file = "skip" # skip bad file
ds = load_dataset(parquet_dataset_id, on_bad_file=on_bad_file)
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7806/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7806/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7805
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7805/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7805/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7805/events
|
https://github.com/huggingface/datasets/pull/7805
| 3,499,286,947
|
PR_kwDODunzps6s52Ew
| 7,805
|
Less api calls when resolving data_files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 13:53:06+00:00
|
2025-10-09 14:01:57+00:00
|
2025-10-09 14:01:56+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7805.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7805",
"merged_at": "2025-10-09T14:01:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7805.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7805"
}
|
There are ~10 unnecessary `/api/datasets/user/dataset/revision` calls due to multithreading in data files resolution.
I disabled multithreading, which was actually not useful anymore since `HfFileSystem` has been using caching for a while now.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7805/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7805/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7804
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7804/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7804/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7804/events
|
https://github.com/huggingface/datasets/issues/7804
| 3,498,534,596
|
I_kwDODunzps7Qh2bE
| 7,804
|
Support scientific data formats
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 1
|
2025-10-09 10:18:24+00:00
|
2025-10-10 11:26:23+00:00
|
NaT
|
MEMBER
| null | null | null | null |
List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [ ] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see, or to share your interest in one of the mentioned formats
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7804/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7804/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7803
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7803/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7803/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7803/events
|
https://github.com/huggingface/datasets/pull/7803
| 3,498,395,879
|
PR_kwDODunzps6s2zyO
| 7,803
|
More Parquet streaming docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-09 09:39:11+00:00
|
2025-10-09 10:01:46+00:00
|
2025-10-09 10:01:43+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7803.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7803",
"merged_at": "2025-10-09T10:01:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7803.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7803"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7803/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7803/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7802
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7802/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7802/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7802/events
|
https://github.com/huggingface/datasets/issues/7802
| 3,497,454,119
|
I_kwDODunzps7Qduon
| 7,802
|
[Docs] Missing documentation for `Dataset.from_dict`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/69421545?v=4",
"events_url": "https://api.github.com/users/aaronshenhao/events{/privacy}",
"followers_url": "https://api.github.com/users/aaronshenhao/followers",
"following_url": "https://api.github.com/users/aaronshenhao/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronshenhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aaronshenhao",
"id": 69421545,
"login": "aaronshenhao",
"node_id": "MDQ6VXNlcjY5NDIxNTQ1",
"organizations_url": "https://api.github.com/users/aaronshenhao/orgs",
"received_events_url": "https://api.github.com/users/aaronshenhao/received_events",
"repos_url": "https://api.github.com/users/aaronshenhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aaronshenhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronshenhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aaronshenhao",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 2
|
2025-10-09 02:54:41+00:00
|
2025-10-19 16:09:33+00:00
|
NaT
|
NONE
| null | null | null | null |
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems to be missing from the official documentation for the `Dataset` class on HuggingFace.
The method in question:
```python
@classmethod
def from_dict(
cls,
mapping: dict,
features: Optional[Features] = None,
info: Optional[DatasetInfo] = None,
split: Optional[NamedSplit] = None,
) -> "Dataset":
"""
Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].
Important: a dataset created with from_dict() lives in memory
and therefore doesn't have an associated cache directory.
This may change in the future, but in the meantime if you
want to reduce memory usage you should write it back on disk
and reload using e.g. save_to_disk / load_from_disk.
Args:
mapping (`Mapping`):
Mapping of strings to Arrays or Python lists.
features ([`Features`], *optional*):
Dataset features.
info (`DatasetInfo`, *optional*):
Dataset information, like description, citation, etc.
split (`NamedSplit`, *optional*):
Name of the dataset split.
Returns:
[`Dataset`]
"""
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7802/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7802/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7801
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7801/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7801/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7801/events
|
https://github.com/huggingface/datasets/pull/7801
| 3,496,388,063
|
PR_kwDODunzps6swITn
| 7,801
|
Add parquet scan options and docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-08 18:04:52+00:00
|
2025-10-09 07:55:58+00:00
|
2025-10-09 07:55:56+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7801",
"merged_at": "2025-10-09T07:55:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7801"
}
|
I added scan options, useful for controlling buffering and caching when streaming, as well as docs covering how to select a subset of columns and apply filters
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7801/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7801/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7800
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7800/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7800/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7800/events
|
https://github.com/huggingface/datasets/pull/7800
| 3,494,747,495
|
PR_kwDODunzps6sqkmT
| 7,800
|
Fix polars cast column image
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 4
|
2025-10-08 10:01:18+00:00
|
2025-10-18 13:48:37+00:00
|
2025-10-13 14:39:47+00:00
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7800",
"merged_at": "2025-10-13T14:39:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7800"
}
|
Fixes #7765
The problem here is that polars uses pyarrow large_string for images, while pandas and others just use the string type. This PR solves that and adds a test.
```python
import polars as pl
from datasets import Dataset
import pandas as pd
import pyarrow as pa
from pathlib import Path
shared_datadir = Path("tests/features/data")
image_path = str(shared_datadir / "test_image_rgb.jpg")
# Load via polars
df_polars = pl.DataFrame({"image_path": [image_path]})
dataset_polars = Dataset.from_polars(df_polars)
print("Polars DF is large string:", pa.types.is_large_string(df_polars.to_arrow().schema[0].type))
print("Polars DF is string:", pa.types.is_string(df_polars.to_arrow().schema[0].type))
# Load via pandas
df_pandas = pd.DataFrame({"image_path": [image_path]})
dataset_pandas = Dataset.from_pandas(df_pandas)
arrow_table_pd = pa.Table.from_pandas(df_pandas)
print("Pandas DF is large string", pa.types.is_large_string(arrow_table_pd.schema[0].type))
print("Pandas DF is string", pa.types.is_string(arrow_table_pd.schema[0].type))
```
Outputs:
```bash
Polars DF is large string: True
Polars DF is string: False
Pandas DF is large string False
Pandas DF is string True
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7800/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7800/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7799
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7799/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7799/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7799/events
|
https://github.com/huggingface/datasets/pull/7799
| 3,487,791,741
|
PR_kwDODunzps6sTJKA
| 7,799
|
Define CI future
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-06 15:15:45+00:00
|
2025-10-07 14:30:21+00:00
|
2025-10-07 14:30:19+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7799",
"merged_at": "2025-10-07T14:30:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7799"
}
|
This should fix the CI, which currently uses transformers on Python 3.9 even though that version is no longer supported
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7799/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7799/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7798
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7798/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7798/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7798/events
|
https://github.com/huggingface/datasets/issues/7798
| 3,484,470,782
|
I_kwDODunzps7PsM3-
| 7,798
|
Audio dataset is not decoding on 4.1.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4",
"events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}",
"followers_url": "https://api.github.com/users/thewh1teagle/followers",
"following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}",
"gists_url": "https://api.github.com/users/thewh1teagle/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thewh1teagle",
"id": 61390950,
"login": "thewh1teagle",
"node_id": "MDQ6VXNlcjYxMzkwOTUw",
"organizations_url": "https://api.github.com/users/thewh1teagle/orgs",
"received_events_url": "https://api.github.com/users/thewh1teagle/received_events",
"repos_url": "https://api.github.com/users/thewh1teagle/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thewh1teagle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thewh1teagle/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thewh1teagle",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 3
|
2025-10-05 06:37:50+00:00
|
2025-10-06 14:07:55+00:00
|
NaT
|
NONE
| null | null | null | null |
### Describe the bug
The audio column items remain non-decoded objects even when accessed.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/datasets/en/audio_load
### Steps to reproduce the bug
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
### Expected behavior
It should decode when accessing the element
### Environment info
4.1.1
ubuntu 22.04
Related
- https://github.com/huggingface/datasets/issues/7707
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7798/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7798/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7797
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7797/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7797/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7797/events
|
https://github.com/huggingface/datasets/pull/7797
| 3,473,011,621
|
PR_kwDODunzps6rhtf_
| 7,797
|
Datasets: Add WMT21 & WMT22 loaders (basic TSV loaders, sample data, tests)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164366940?v=4",
"events_url": "https://api.github.com/users/tanisha-samant/events{/privacy}",
"followers_url": "https://api.github.com/users/tanisha-samant/followers",
"following_url": "https://api.github.com/users/tanisha-samant/following{/other_user}",
"gists_url": "https://api.github.com/users/tanisha-samant/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanisha-samant",
"id": 164366940,
"login": "tanisha-samant",
"node_id": "U_kgDOCcwKXA",
"organizations_url": "https://api.github.com/users/tanisha-samant/orgs",
"received_events_url": "https://api.github.com/users/tanisha-samant/received_events",
"repos_url": "https://api.github.com/users/tanisha-samant/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanisha-samant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanisha-samant/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanisha-samant",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-10-01 10:46:01+00:00
|
2025-10-10 15:33:25+00:00
|
2025-10-10 15:33:25+00:00
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7797.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7797",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7797.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7797"
}
|
- Implemented TSV-based dataset loaders:
- WMT21Dataset (local_datasets/wmt21/wmt21_dataset.py)
- WMT22Dataset (local_datasets/wmt22/wmt22_dataset.py)
These classes load source-target pairs from .tsv files for train, validation, and test splits.
- Created sample dummy data for both datasets:
- dummy_data/train.tsv, dummy_data/validation.tsv, dummy_data/test.tsv
- Includes a few realistic example lines to allow CI and local tests to pass without downloading full datasets.
- Added automated tests for robust validation:
- tests/test_wmt21.py and tests/test_wmt22.py
- Checks that all splits load correctly, empty lines are ignored, and the number of examples matches the number of lines in the .tsv files.
- Edge cases handled: empty lines, malformed lines, extra tabs.
- Added README.md files for both datasets:
- Provides dataset structure, usage instructions, and placeholders for citation & license information.
- Ensures that other developers and reviewers can understand dataset usage immediately.
- Ensured easy local testing:
- Load datasets programmatically using WMT21Dataset / WMT22Dataset.
- Verified train/validation/test splits are correctly returned as Python dictionaries of Dataset objects.
- Provides initial support for WMT21 and WMT22 NLP/translation experiments.
- Allows contributors and reviewers to test dataset loading locally or in CI without downloading large datasets.
- Serves as a template to extend to other WMT datasets in the future.
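The TSV handling described above (skip empty lines, drop malformed lines, tolerate extra tabs) could be sketched roughly as follows; the function name and exact policy are illustrative, not the PR's actual code:

```python
from typing import Iterable, Iterator, Tuple

def parse_tsv_pairs(lines: Iterable[str]) -> Iterator[Tuple[str, str]]:
    """Yield (source, target) pairs from TSV lines, skipping bad input."""
    for line in lines:
        line = line.rstrip("\n")
        if not line.strip():
            continue  # ignore empty lines
        parts = line.split("\t")
        if len(parts) < 2:
            continue  # malformed: no target column
        # extra tabs: first field is the source, the rest rejoin as the target
        yield parts[0], "\t".join(parts[1:])
```

With this policy, a line containing extra tabs keeps everything after the first tab as the target rather than being discarded.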
Testing instructions:
```
# Activate your environment
pytest tests/test_wmt21.py -v
pytest tests/test_wmt22.py -v
```
```
from local_datasets.wmt21.wmt21_dataset import WMT21Dataset
from local_datasets.wmt22.wmt22_dataset import WMT22Dataset
# WMT21
wmt21 = WMT21Dataset("local_datasets/wmt21/dummy_data")
ds21 = wmt21.load()
print(ds21["train"][0])
print(ds21["validation"][0])
print(ds21["test"][0])
# WMT22
wmt22 = WMT22Dataset("local_datasets/wmt22/dummy_data")
ds22 = wmt22.load()
print(ds22["train"][0])
print(ds22["validation"][0])
print(ds22["test"][0])
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7797/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7797/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7796
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7796/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7796/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7796/events
|
https://github.com/huggingface/datasets/pull/7796
| 3,470,616,799
|
PR_kwDODunzps6rZjrW
| 7,796
|
Docs: fix typo, improve readability, add code comments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164366940?v=4",
"events_url": "https://api.github.com/users/tanisha-samant/events{/privacy}",
"followers_url": "https://api.github.com/users/tanisha-samant/followers",
"following_url": "https://api.github.com/users/tanisha-samant/following{/other_user}",
"gists_url": "https://api.github.com/users/tanisha-samant/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanisha-samant",
"id": 164366940,
"login": "tanisha-samant",
"node_id": "U_kgDOCcwKXA",
"organizations_url": "https://api.github.com/users/tanisha-samant/orgs",
"received_events_url": "https://api.github.com/users/tanisha-samant/received_events",
"repos_url": "https://api.github.com/users/tanisha-samant/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanisha-samant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanisha-samant/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanisha-samant",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-09-30 18:34:16+00:00
|
2025-10-10 18:44:12+00:00
|
NaT
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7796.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7796",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7796.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7796"
}
|
What I did:
- Fixed a small typo in README to improve clarity
- Fixed repeated word "frameworks frameworks"
- Split long paragraphs into shorter sentences for readability
- Added # Example comments before code blocks for clarity
Why:
- Helps new users avoid confusion
How I tested:
- Checked locally in Markdown preview
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7796/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7796/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7795
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7795/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7795/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7795/events
|
https://github.com/huggingface/datasets/pull/7795
| 3,463,990,654
|
PR_kwDODunzps6rDEce
| 7,795
|
Add pyarrow's binary view to features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6834061?v=4",
"events_url": "https://api.github.com/users/delta003/events{/privacy}",
"followers_url": "https://api.github.com/users/delta003/followers",
"following_url": "https://api.github.com/users/delta003/following{/other_user}",
"gists_url": "https://api.github.com/users/delta003/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/delta003",
"id": 6834061,
"login": "delta003",
"node_id": "MDQ6VXNlcjY4MzQwNjE=",
"organizations_url": "https://api.github.com/users/delta003/orgs",
"received_events_url": "https://api.github.com/users/delta003/received_events",
"repos_url": "https://api.github.com/users/delta003/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/delta003/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/delta003/subscriptions",
"type": "User",
"url": "https://api.github.com/users/delta003",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 2
|
2025-09-29 09:12:55+00:00
|
2025-10-10 16:04:21+00:00
|
2025-10-10 16:04:21+00:00
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7795.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7795",
"merged_at": "2025-10-10T16:04:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7795.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7795"
}
|
Basically https://github.com/huggingface/datasets/pull/7718 just for binary view instead of string view
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7795/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7795/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7794
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7794/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7794/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7794/events
|
https://github.com/huggingface/datasets/pull/7794
| 3,460,793,966
|
PR_kwDODunzps6q4XyU
| 7,794
|
Fix nested data conversions error in parquet loading (fixes #7793)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41635755?v=4",
"events_url": "https://api.github.com/users/Aishwarya0811/events{/privacy}",
"followers_url": "https://api.github.com/users/Aishwarya0811/followers",
"following_url": "https://api.github.com/users/Aishwarya0811/following{/other_user}",
"gists_url": "https://api.github.com/users/Aishwarya0811/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aishwarya0811",
"id": 41635755,
"login": "Aishwarya0811",
"node_id": "MDQ6VXNlcjQxNjM1NzU1",
"organizations_url": "https://api.github.com/users/Aishwarya0811/orgs",
"received_events_url": "https://api.github.com/users/Aishwarya0811/received_events",
"repos_url": "https://api.github.com/users/Aishwarya0811/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aishwarya0811/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aishwarya0811/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aishwarya0811",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 6
|
2025-09-27 22:04:13+00:00
|
2025-10-01 16:56:20+00:00
|
NaT
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7794",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7794"
}
|
Fixes #7793
## Problem
Loading datasets with deeply nested structures (like `metr-evals/malt-public`) fails with:
`ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs`
This occurs when parquet files contain nested data (lists, structs, maps) that exceed PyArrow's 16MB chunk limit.
## Root Cause
PyArrow's C++ implementation explicitly rejects nested data conversions when data is split across multiple chunks. The limitation exists in the `WrapIntoListArray` function where repetition levels cannot be reconstructed across chunk boundaries.
## Solution
- **Fallback mechanism**: Catches the specific PyArrow error and switches to non-chunked reading
- **Selective optimization**: Only combines chunks for problematic nested columns to minimize memory impact
- **Manual batching**: Maintains batching behavior even in fallback mode
- **Backward compatibility**: Zero impact on existing datasets
## Implementation Details
- Added `_is_nested_type()` helper to detect nested PyArrow types
- Added `_handle_nested_chunked_conversion()` for selective chunk combining
- Modified `_generate_tables()` to catch and handle the specific error
- Preserves all existing error handling and logging
## Testing
- [x] No regressions: Normal parquet datasets continue working
- [x] Code follows existing patterns in the datasets codebase
- [ ] Tested by original reporter (gated dataset access needed)
**Note**: This fix is based on thorough research of PyArrow limitations and similar issues in the ecosystem. While we cannot test with the original dataset due to access restrictions, the implementation follows established patterns for handling this PyArrow limitation.
## Request for Testing
@neevparikh Could you please test this fix with your original failing dataset? The implementation should resolve the nested data conversion error you encountered.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7794/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7794/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7793
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7793/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7793/events
|
https://github.com/huggingface/datasets/issues/7793
| 3,459,496,971
|
I_kwDODunzps7OM7wL
| 7,793
|
Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41182432?v=4",
"events_url": "https://api.github.com/users/neevparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/neevparikh/followers",
"following_url": "https://api.github.com/users/neevparikh/following{/other_user}",
"gists_url": "https://api.github.com/users/neevparikh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neevparikh",
"id": 41182432,
"login": "neevparikh",
"node_id": "MDQ6VXNlcjQxMTgyNDMy",
"organizations_url": "https://api.github.com/users/neevparikh/orgs",
"received_events_url": "https://api.github.com/users/neevparikh/received_events",
"repos_url": "https://api.github.com/users/neevparikh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neevparikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neevparikh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neevparikh",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 1
|
2025-09-27 01:03:12+00:00
|
2025-09-27 21:35:31+00:00
|
NaT
|
NONE
| null | null | null | null |
### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call last):
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
for _, table in generator:
^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/packaged_modules/parquet/parquet.py", line 93, in _generate_tables
for batch_idx, record_batch in enumerate(
~~~~~~~~~^
parquet_fragment.to_batches(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
):
^
File "pyarrow/_dataset.pyx", line 3904, in _iterator
File "pyarrow/_dataset.pyx", line 3494, in pyarrow._dataset.TaggedRecordBatchIterator.__next__
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/neev/scratch/test_hf.py", line 3, in <module>
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/load.py", line 1412, in load_dataset
builder_instance.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
storage_options=storage_options,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
### Expected behavior
The dataset loads
### Environment info
Datasets: 4.1.1
Python: 3.13
Platform: Macos
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7793/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7792
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7792/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7792/events
|
https://github.com/huggingface/datasets/issues/7792
| 3,456,802,210
|
I_kwDODunzps7OCp2i
| 7,792
|
Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/LTMeyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LTMeyer",
"id": 13559010,
"login": "LTMeyer",
"node_id": "MDQ6VXNlcjEzNTU5MDEw",
"organizations_url": "https://api.github.com/users/LTMeyer/orgs",
"received_events_url": "https://api.github.com/users/LTMeyer/received_events",
"repos_url": "https://api.github.com/users/LTMeyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LTMeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LTMeyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LTMeyer",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null | 17
|
2025-09-26 10:05:19+00:00
|
2025-10-15 18:05:23+00:00
|
2025-10-15 18:05:23+00:00
|
NONE
| null | null | null | null |
### Feature request
I would like to be able to concatenate multiple `IterableDataset` with possibly different features. I would like to then be able to stream the results in parallel (both using DDP and multiple workers in the pytorch DataLoader). I want the merge of datasets to be well balanced between the different processes.
### Motivation
I want to train a model on a combination of datasets, which I can convert to a single representation. This applies both to converting items from different datasets to the same Python class and to using a tokenizer on multiple modalities.
Assuming that my original datasets are not necessarily well balanced, as they may have different sizes and thus different numbers of shards, I would like the merged dataset to be distributed evenly over the multiple processes. I don't mind if it's not perfectly balanced and, as a result, some workers of the torch DataLoader do nothing, as long as DDP is properly handled and no deadlock occurs.
### What I've tried
I've tried the two functions already provided in datasets, namely `interleave_datasets` and `concatenate_datasets`.
- Interleave seems to be the best approach of what I'm trying to do. However, it doesn't suit my purpose because as I understand it, it stops as soon as one of the dataset source is exhausted, or repeat the smallest source items until the largest is exhausted. I would like something in-between, similarly to what [roundrobin does](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.roundrobin).
- Concatenate does not mix the data enough and one dataset may be overrepresented in some early batches.
Let's consider we have 3 datasets composed of different numbers of shards, as follows: [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]], where s denotes the underlying shard, the first index the dataset, and the second the shard number.
If we request 3 shards in `shard_data_sources`, we should obtain the following:
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
I started implementing the following, but I'm afraid my sharding logic is incorrect.
```python
from copy import deepcopy
from itertools import chain, islice
import datasets
import numpy as np
from datasets import IterableDataset
from datasets.iterable_dataset import _BaseExamplesIterable
from more_itertools import roundrobin
class MixMultiSourcesExampleIterable(_BaseExamplesIterable):
def __init__(self, ex_iterables: list[_BaseExamplesIterable]):
super().__init__()
self.ex_iterables = ex_iterables
def _init_state_dict(self) -> dict:
self._state_dict = {
"ex_iterables": [ex_iterable._init_state_dict() for ex_iterable in self.ex_iterables],
"type": self.__class__.__name__,
}
return self._state_dict
@property
def num_shards(self) -> int:
return sum(ex_iterable.num_shards for ex_iterable in self.ex_iterables)
def __iter__(self):
yield from roundrobin(*self.ex_iterables)
def shuffle_data_sources(self, generator: np.random.Generator) -> "MixMultiSourcesExampleIterable":
"""Shuffle the list of examples iterable, as well as each underlying examples iterable."""
rng = deepcopy(generator)
ex_iterables = list(self.ex_iterables)
rng.shuffle(ex_iterables)
ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in ex_iterables]
return MixMultiSourcesExampleIterable(ex_iterables)
def shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> "MixMultiSourcesExampleIterable":
"""Shard the underlying iterables in a roundrobin manner.
Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],
and we request 3 shards.
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
"""
return MixMultiSourcesExampleIterable(
list(
islice(
# flatten all underlying iterables
chain.from_iterable([ex_iterable.shard_data_sources(1, 0) for ex_iterable in self.ex_iterables]),
# offset the starting point by the index
index,
# take over the full list, so exhaust the iterators
None,
# step by the number of shards requested
num_shards,
)
)
)
def mix_dataset(iterable_datasets: list[datasets.IterableDataset]) -> IterableDataset:
ex_iterable = MixMultiSourcesExampleIterable([ds._ex_iterable for ds in iterable_datasets])
return IterableDataset(
ex_iterable, distributed=iterable_datasets[0]._distributed, formatting=iterable_datasets[0]._formatting
)
```
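The stride-based assignment in `shard_data_sources` can be sanity-checked in isolation with plain Python; this toy version (my own illustration, not library code) operates on shard names instead of example iterables:

```python
from itertools import chain, islice

def assign_shards(sources, num_shards, index):
    """Flatten the per-dataset shard lists, then take every num_shards-th
    element starting at `index` (the same chain/islice stride as above)."""
    flat = chain.from_iterable(sources)
    return list(islice(flat, index, None, num_shards))

sources = [["s0_0", "s0_1"], ["s1_0"], ["s2_0", "s2_1", "s2_3"]]
for i in range(3):
    print(i, assign_shards(sources, 3, i))
# 0 ['s0_0', 's2_0']
# 1 ['s0_1', 's2_1']
# 2 ['s1_0', 's2_3']
```

This reproduces exactly the assignment listed in the docstring, which suggests the flatten-then-stride logic itself is sound; the open question is whether it composes correctly with how `IterableDataset` invokes sharding.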
### Questions
- Am I missing something? Is there a way to use `interleave_datasets` or `concatenate_datasets` to fit my purpose?
- Would it be the right approach to spread the maximum number of underlying shards across my different processes?
### Your contribution
As much as I can.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/LTMeyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LTMeyer",
"id": 13559010,
"login": "LTMeyer",
"node_id": "MDQ6VXNlcjEzNTU5MDEw",
"organizations_url": "https://api.github.com/users/LTMeyer/orgs",
"received_events_url": "https://api.github.com/users/LTMeyer/received_events",
"repos_url": "https://api.github.com/users/LTMeyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LTMeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LTMeyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LTMeyer",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7792/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7791
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7791/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7791/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7791/events
|
https://github.com/huggingface/datasets/pull/7791
| 3,454,046,306
|
PR_kwDODunzps6qh_2W
| 7,791
|
fix: add `num_proc` argument to `Dataset.to_sql`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/100021446?v=4",
"events_url": "https://api.github.com/users/EricSaikali/events{/privacy}",
"followers_url": "https://api.github.com/users/EricSaikali/followers",
"following_url": "https://api.github.com/users/EricSaikali/following{/other_user}",
"gists_url": "https://api.github.com/users/EricSaikali/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EricSaikali",
"id": 100021446,
"login": "EricSaikali",
"node_id": "U_kgDOBfY0xg",
"organizations_url": "https://api.github.com/users/EricSaikali/orgs",
"received_events_url": "https://api.github.com/users/EricSaikali/received_events",
"repos_url": "https://api.github.com/users/EricSaikali/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EricSaikali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EricSaikali/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EricSaikali",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 2
|
2025-09-25 15:02:46+00:00
|
2025-10-18 13:21:16+00:00
|
NaT
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7791",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7791"
}
|
**Task Done:**
- Resolve issue #7788 : Add the missing argument mapping in Dataset.to_sql (`src/datasets/arrow_dataset.py`)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7791/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7791/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7790
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7790/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7790/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7790/events
|
https://github.com/huggingface/datasets/pull/7790
| 3,453,679,876
|
PR_kwDODunzps6qgvjv
| 7,790
|
update tips in docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 2
|
2025-09-25 13:36:02+00:00
|
2025-09-25 13:39:28+00:00
|
2025-09-25 13:39:22+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7790",
"merged_at": "2025-09-25T13:39:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7790"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7790/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7790/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7789
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7789/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7789/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7789/events
|
https://github.com/huggingface/datasets/pull/7789
| 3,453,273,059
|
PR_kwDODunzps6qfZUc
| 7,789
|
fix link for rotten_tomatoes dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8176079?v=4",
"events_url": "https://api.github.com/users/0xmohit/events{/privacy}",
"followers_url": "https://api.github.com/users/0xmohit/followers",
"following_url": "https://api.github.com/users/0xmohit/following{/other_user}",
"gists_url": "https://api.github.com/users/0xmohit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/0xmohit",
"id": 8176079,
"login": "0xmohit",
"node_id": "MDQ6VXNlcjgxNzYwNzk=",
"organizations_url": "https://api.github.com/users/0xmohit/orgs",
"received_events_url": "https://api.github.com/users/0xmohit/received_events",
"repos_url": "https://api.github.com/users/0xmohit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/0xmohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0xmohit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/0xmohit",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-09-25 11:51:36+00:00
|
2025-09-25 11:51:36+00:00
|
NaT
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7789",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7789"
}
|
The current link leads to a 404 page.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7789/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7789/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7788
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7788/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7788/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7788/events
|
https://github.com/huggingface/datasets/issues/7788
| 3,450,913,796
|
I_kwDODunzps7NsMQE
| 7,788
|
`Dataset.to_sql` doesn't utilize `num_proc`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30357072?v=4",
"events_url": "https://api.github.com/users/tcsmaster/events{/privacy}",
"followers_url": "https://api.github.com/users/tcsmaster/followers",
"following_url": "https://api.github.com/users/tcsmaster/following{/other_user}",
"gists_url": "https://api.github.com/users/tcsmaster/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tcsmaster",
"id": 30357072,
"login": "tcsmaster",
"node_id": "MDQ6VXNlcjMwMzU3MDcy",
"organizations_url": "https://api.github.com/users/tcsmaster/orgs",
"received_events_url": "https://api.github.com/users/tcsmaster/received_events",
"repos_url": "https://api.github.com/users/tcsmaster/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tcsmaster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tcsmaster/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tcsmaster",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 0
|
2025-09-24 20:34:47+00:00
|
2025-09-24 20:35:01+00:00
|
NaT
|
NONE
| null | null | null | null |
The underlying `SqlDatasetWriter` accepts a `num_proc` argument [here](https://github.com/huggingface/datasets/blob/5dc1a179783dff868b0547c8486268cfaea1ea1f/src/datasets/io/sql.py#L63), but `Dataset.to_sql()` does not expose it, so the SQL conversion always runs in a single process.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7788/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7788/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7787
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7787/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7787/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7787/events
|
https://github.com/huggingface/datasets/pull/7787
| 3,450,858,674
|
PR_kwDODunzps6qXRo-
| 7,787
|
feat: avoid some copies in torch formatter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9896130?v=4",
"events_url": "https://api.github.com/users/drbh/events{/privacy}",
"followers_url": "https://api.github.com/users/drbh/followers",
"following_url": "https://api.github.com/users/drbh/following{/other_user}",
"gists_url": "https://api.github.com/users/drbh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/drbh",
"id": 9896130,
"login": "drbh",
"node_id": "MDQ6VXNlcjk4OTYxMzA=",
"organizations_url": "https://api.github.com/users/drbh/orgs",
"received_events_url": "https://api.github.com/users/drbh/received_events",
"repos_url": "https://api.github.com/users/drbh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/drbh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drbh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/drbh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 2
|
2025-09-24 20:19:44+00:00
|
2025-09-26 15:04:25+00:00
|
2025-09-26 15:04:23+00:00
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7787",
"merged_at": "2025-09-26T15:04:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7787"
}
|
## perf: reduce copies in TorchFormatter
This PR changes the torch formatter to avoid unnecessary copies and casts when converting decoded batches to tensors.
Because many arrays are already in a torch-friendly memory layout and dtype, we can do zero‑copy conversions (`torch.from_numpy`) and only fall back to `as_tensor` when a dtype/device change is required. We also consolidate lists of same‑shape tensors with a cheap `stack` only when safe.
Why it helps
- Avoids extra materialization and dtype churn during batched map and indexing.
- Preserves API and outputs; only changes internal conversion logic.
Small benchmark script (based on https://github.com/huggingface/datasets/issues/6104)
```python
import time
from datasets import load_dataset
def main():
dataset = load_dataset("NightMachinery/hf_datasets_bug1")
dataset = dataset["train"] if "train" in dataset else dataset
t0 = time.time()
dataset.set_format(type="torch")
# identity map with small batches
dataset = dataset.map(lambda x: x, batched=True, batch_size=20)
# force materialization
data = dataset[:300]
print(len(data.keys()))
t1 = time.time()
print(f"Duration: {t1 - t0:.2f} s")
if __name__ == "__main__":
main()
```
Without changes
```bash
uv run bench.py
```
```bash
# 303
# Duration: 7.26 s
```
With changes
```bash
uv run bench.py
```
```bash
# 303
# Duration: 4.43 s
```
# Updated reproduction scripts
Below are some simple test cases using `main` and this `refactor-torch-formatter` branch. I've included the two scripts and output when running on a local machine.
```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "torch",
# "datasets",
# "pillow",
# ]
#
# [tool.uv.sources]
# datasets = { git = "https://github.com/huggingface/datasets.git" }
# ///
import time
import random
import numpy as np
from PIL import Image
from datasets import Dataset, load_dataset
import torch
def create_mock_images_dataset(num_samples=5000):
"""Create a deterministic mock dataset with PIL images."""
random.seed(42)
np.random.seed(42)
images = []
labels = []
for i in range(num_samples):
# Create deterministic RGB image
width, height = 64, 64
rgb_array = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
image = Image.fromarray(rgb_array)
images.append(image)
labels.append(i % 10) # 10 classes
return Dataset.from_dict({"image": images, "label": labels})
def create_mock_text_dataset(num_samples=5000):
"""Create a deterministic mock dataset with text."""
random.seed(42)
words = ["apple", "banana", "cherry", "date", "elderberry", "fig", "grape", "honeydew"]
texts = []
labels = []
for i in range(num_samples):
# Create deterministic text
text_length = 5 + (i % 20) # 5-24 words
text = " ".join(random.choices(words, k=text_length))
texts.append(text)
labels.append(i % 3) # 3 classes
return Dataset.from_dict({"text": texts, "label": labels})
def create_mock_ints_dataset(num_samples=5000):
"""Create a deterministic mock dataset with integers."""
random.seed(42)
data = []
labels = []
for i in range(num_samples):
# Create deterministic integer arrays
arr = [random.randint(0, 1000) for _ in range(50)] # 50 integers each
data.append(arr)
labels.append(i % 5) # 5 classes
return Dataset.from_dict({"data": data, "label": labels})
def create_mock_floats_dataset(num_samples=5000):
"""Create a deterministic mock dataset with floats."""
random.seed(42)
data = []
labels = []
for i in range(num_samples):
# Create deterministic float arrays
arr = [random.uniform(0.0, 100.0) for _ in range(30)] # 30 floats each
data.append(arr)
labels.append(i % 4) # 4 classes
return Dataset.from_dict({"data": data, "label": labels})
def benchmark_dataset(name, dataset, num_samples=1000):
"""Benchmark dataset access speed."""
print(f"\n=== {name} Dataset Benchmark ===")
t0 = time.time()
dataset.set_format(type="torch")
# identity map with small batches
dataset = dataset.map(lambda x: x, batched=True, batch_size=20)
# force materialization
data = dataset[:num_samples]
print(f"Keys: {list(data.keys())}")
print(f"Sample count: {len(data[list(data.keys())[0]])}")
t1 = time.time()
print(f"Duration: {t1 - t0:.2f} s")
print(f"Speed: {num_samples / (t1 - t0):.1f} samples/s")
def main():
# PIL Images benchmark
images_dataset = create_mock_images_dataset()
benchmark_dataset("PIL Images", images_dataset)
# Text benchmark
text_dataset = create_mock_text_dataset()
benchmark_dataset("Text", text_dataset)
# Integers benchmark
ints_dataset = create_mock_ints_dataset()
benchmark_dataset("Integers", ints_dataset)
# Floats benchmark
floats_dataset = create_mock_floats_dataset()
benchmark_dataset("Floats", floats_dataset)
if __name__ == "__main__":
main()
```
output
```bash
uv run --refresh example1.py
```
```text
=== PIL Images Dataset Benchmark ===
Map: 0%| | 0/5000 [00:00<?, ? examples/s]/Users/drbh/.cache/uv/environments-v2/example1-2aca1a30e84bdead/lib/python3.10/site-packages/datasets/features/image.py:352: UserWarning: Downcasting array dtype int64 to uint8 to be compatible with 'Pillow'
warnings.warn(f"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'")
Map: 100%|█████████████████████████████████████████████| 5000/5000 [00:01<00:00, 3669.15 examples/s]
Keys: ['image', 'label']
Sample count: 1000
Duration: 2.14 s
Speed: 466.5 samples/s
=== Text Dataset Benchmark ===
Map: 100%|███████████████████████████████████████████| 5000/5000 [00:00<00:00, 141327.04 examples/s]
Keys: ['text', 'label']
Sample count: 1000
Duration: 0.04 s
Speed: 27004.3 samples/s
=== Integers Dataset Benchmark ===
Map: 100%|███████████████████████████████████████████| 5000/5000 [00:00<00:00, 112904.90 examples/s]
Keys: ['data', 'label']
Sample count: 1000
Duration: 0.05 s
Speed: 21680.6 samples/s
=== Floats Dataset Benchmark ===
Map: 100%|███████████████████████████████████████████| 5000/5000 [00:00<00:00, 104084.25 examples/s]
Keys: ['data', 'label']
Sample count: 1000
Duration: 0.05 s
Speed: 20215.1 samples/s
```
and this branch specifically
```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "torch",
# "datasets",
# "pillow",
# ]
#
# [tool.uv.sources]
# datasets = { git = "https://github.com/huggingface/datasets.git", rev = "refactor-torch-formatter" }
# ///
import time
import random
import numpy as np
from PIL import Image
from datasets import Dataset, load_dataset
import torch
def create_mock_images_dataset(num_samples=5000):
"""Create a deterministic mock dataset with PIL images."""
random.seed(42)
np.random.seed(42)
images = []
labels = []
for i in range(num_samples):
# Create deterministic RGB image
width, height = 64, 64
rgb_array = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
image = Image.fromarray(rgb_array)
images.append(image)
labels.append(i % 10) # 10 classes
return Dataset.from_dict({"image": images, "label": labels})
def create_mock_text_dataset(num_samples=5000):
"""Create a deterministic mock dataset with text."""
random.seed(42)
words = [
"apple",
"banana",
"cherry",
"date",
"elderberry",
"fig",
"grape",
"honeydew",
]
texts = []
labels = []
for i in range(num_samples):
# Create deterministic text
text_length = 5 + (i % 20) # 5-24 words
text = " ".join(random.choices(words, k=text_length))
texts.append(text)
labels.append(i % 3) # 3 classes
return Dataset.from_dict({"text": texts, "label": labels})
def create_mock_ints_dataset(num_samples=5000):
"""Create a deterministic mock dataset with integers."""
random.seed(42)
data = []
labels = []
for i in range(num_samples):
# Create deterministic integer arrays
arr = [random.randint(0, 1000) for _ in range(50)] # 50 integers each
data.append(arr)
labels.append(i % 5) # 5 classes
return Dataset.from_dict({"data": data, "label": labels})
def create_mock_floats_dataset(num_samples=5000):
"""Create a deterministic mock dataset with floats."""
random.seed(42)
data = []
labels = []
for i in range(num_samples):
# Create deterministic float arrays
arr = [random.uniform(0.0, 100.0) for _ in range(30)] # 30 floats each
data.append(arr)
labels.append(i % 4) # 4 classes
return Dataset.from_dict({"data": data, "label": labels})
def benchmark_dataset(name, dataset, num_samples=1000):
"""Benchmark dataset access speed."""
print(f"\n=== {name} Dataset Benchmark ===")
t0 = time.time()
dataset.set_format(type="torch")
# identity map with small batches
dataset = dataset.map(lambda x: x, batched=True, batch_size=20)
# force materialization
data = dataset[:num_samples]
print(f"Keys: {list(data.keys())}")
print(f"Sample count: {len(data[list(data.keys())[0]])}")
t1 = time.time()
print(f"Duration: {t1 - t0:.2f} s")
print(f"Speed: {num_samples / (t1 - t0):.1f} samples/s")
def main():
# PIL Images benchmark
images_dataset = create_mock_images_dataset()
benchmark_dataset("PIL Images", images_dataset)
# Text benchmark
text_dataset = create_mock_text_dataset()
benchmark_dataset("Text", text_dataset)
# Integers benchmark
ints_dataset = create_mock_ints_dataset()
benchmark_dataset("Integers", ints_dataset)
# Floats benchmark
floats_dataset = create_mock_floats_dataset()
benchmark_dataset("Floats", floats_dataset)
if __name__ == "__main__":
main()
```
```bash
uv run --refresh example2.py
```
```text
Updated https://github.com/huggingface/datasets.git (2cb64d1b6503afb49d822b20979760efe4519d03)
Built datasets @ git+https://github.com/huggingface/datasets.git@2cb64d1b6503afb49d822b20979760efe
Uninstalled 1 package in 20ms
Installed 1 package in 5ms
=== PIL Images Dataset Benchmark ===
Map: 0%| | 0/5000 [00:00<?, ? examples/s]/Users/drbh/.cache/uv/environments-v2/example2-d4af608668b706ec/lib/python3.10/site-packages/datasets/features/image.py:352: UserWarning: Downcasting array dtype int64 to uint8 to be compatible with 'Pillow'
warnings.warn(f"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'")
Map: 100%|█████████████████████████████████████████████| 5000/5000 [00:01<00:00, 3645.14 examples/s]
Keys: ['image', 'label']
Sample count: 1000
Duration: 2.04 s
Speed: 491.2 samples/s
=== Text Dataset Benchmark ===
Map: 100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 169877.28 examples/s]
Keys: ['text', 'label']
Sample count: 1000
Duration: 0.03 s
Speed: 32236.1 samples/s
=== Integers Dataset Benchmark ===
Map: 100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 131940.33 examples/s]
Keys: ['data', 'label']
Sample count: 1000
Duration: 0.04 s
Speed: 25493.3 samples/s
=== Floats Dataset Benchmark ===
Map: 100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 120621.64 examples/s]
Keys: ['data', 'label']
Sample count: 1000
Duration: 0.04 s
Speed: 23370.6 samples/s
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7787/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7787/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7786
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7786/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7786/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7786/events
|
https://github.com/huggingface/datasets/pull/7786
| 3,448,506,148
|
PR_kwDODunzps6qPTgs
| 7,786
|
Sample without replacement option when interleaving datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4",
"events_url": "https://api.github.com/users/radulescupetru/events{/privacy}",
"followers_url": "https://api.github.com/users/radulescupetru/followers",
"following_url": "https://api.github.com/users/radulescupetru/following{/other_user}",
"gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/radulescupetru",
"id": 26553095,
"login": "radulescupetru",
"node_id": "MDQ6VXNlcjI2NTUzMDk1",
"organizations_url": "https://api.github.com/users/radulescupetru/orgs",
"received_events_url": "https://api.github.com/users/radulescupetru/received_events",
"repos_url": "https://api.github.com/users/radulescupetru/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions",
"type": "User",
"url": "https://api.github.com/users/radulescupetru",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 8
|
2025-09-24 09:18:14+00:00
|
2025-10-07 14:50:16+00:00
|
2025-10-07 14:50:16+00:00
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7786.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7786",
"merged_at": "2025-10-07T14:50:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7786.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7786"
}
|
Right now, the `interleave_datasets` function samples with replacement when `probabilities` are provided. This PR adds the ability to sample without replacement.
```
import datasets
# Create datasets of different sizes to test exhaustion
data_a = [{"value": i, "source": "A"} for i in range(5)]
data_b = [{"value": i, "source": "B"} for i in range(10, 15)]
ds_a = datasets.Dataset.from_list(data_a).to_iterable_dataset()
ds_b = datasets.Dataset.from_list(data_b).to_iterable_dataset()
# Interleave with probabilities
ds_interleaved = datasets.interleave_datasets(
[ds_a, ds_b],
probabilities=[0.6, 0.4],
seed=42,
stopping_strategy="all_exhausted",
sample_with_replacement=True,
)
for i, example in enumerate(ds_interleaved):
print(f"Sample:{i}: value:{example['value']:02d} source:{example['source']}")
```
In this example, `sample_with_replacement=True` and it prints:
```
Sample:0: value:10 source:B
Sample:1: value:00 source:A
Sample:2: value:11 source:B
Sample:3: value:12 source:B
Sample:4: value:01 source:A
Sample:5: value:13 source:B
Sample:6: value:14 source:B
Sample:7: value:10 source:B
Sample:8: value:02 source:A
Sample:9: value:03 source:A
Sample:10: value:04 source:A
```
Note that the sample with value:10 source:B is drawn twice (Sample:0 and Sample:7).
Re-running with `sample_with_replacement=False` instead prints:
```
Sample:0: value:10 source:B
Sample:1: value:00 source:A
Sample:2: value:11 source:B
Sample:3: value:12 source:B
Sample:4: value:01 source:A
Sample:5: value:13 source:B
Sample:6: value:14 source:B
Sample:7: value:02 source:A
Sample:8: value:03 source:A
Sample:9: value:04 source:A
```
Note that we don't see any repeated items.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7786/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7786/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7785
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7785/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7785/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7785/events
|
https://github.com/huggingface/datasets/pull/7785
| 3,439,897,018
|
PR_kwDODunzps6pyTM_
| 7,785
|
Fix Audio docstring by removing unsupported mono argument
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanuj-rai",
"id": 84439872,
"login": "tanuj-rai",
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanuj-rai",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null | 2
|
2025-09-22 09:06:52+00:00
|
2025-09-23 09:57:37+00:00
|
NaT
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7785",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7785"
}
|
This PR fixes issue #7745.
Who can review:
@lhoestq
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7785/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7785/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7783
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7783/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7783/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7783/events
|
https://github.com/huggingface/datasets/pull/7783
| 3,430,715,779
|
PR_kwDODunzps6pT7pg
| 7,783
|
Support huggingface_hub v0.x and v1.x
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 2
|
2025-09-18 14:45:20+00:00
|
2025-10-01 13:56:05+00:00
|
2025-10-01 13:56:03+00:00
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7783",
"merged_at": "2025-10-01T13:56:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7783"
}
|
Related to https://github.com/huggingface/huggingface_hub/issues/3340.
This PR adapts `datasets` to be compatible with both huggingface_hub v0.x and v1.x.
In practice nothing else should change (I've checked the codebase). The `HfHubHTTPError` is a base error defined in `huggingface_hub` that inherits from `requests.HTTPError` in v0.x and will inherit from `httpx.HTTPError` in v1.x. It was introduced about two years ago, so it's safe to use right now (i.e. no need to wait for the v1.x release or bump the minimal version).
Most of the changes have been around the test suite to make sure that tests are passing with both `requests` and `httpx` backends. Mid-term it would be good to completely remove the `requests` dependency from `datasets` but that's an orthogonal topic.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7783/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7783/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7782
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7782/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7782/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7782/events
|
https://github.com/huggingface/datasets/pull/7782
| 3,430,341,875
|
PR_kwDODunzps6pSozj
| 7,782
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-09-18 13:15:56+00:00
|
2025-09-18 13:20:03+00:00
|
2025-09-18 13:16:04+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7782",
"merged_at": "2025-09-18T13:16:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7782"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7782/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7782/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7781
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7781/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7781/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7781/events
|
https://github.com/huggingface/datasets/pull/7781
| 3,430,332,841
|
PR_kwDODunzps6pSm0C
| 7,781
|
release: 4.1.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null | 1
|
2025-09-18 13:13:47+00:00
|
2025-09-18 13:16:48+00:00
|
2025-09-18 13:14:47+00:00
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7781",
"merged_at": "2025-09-18T13:14:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7781"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7781/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7781/timeline
| null | null | null | null | true
|
Dataset Card for GitHub Issues without Comments
Dataset Summary
The GitHub Issues dataset contains issues and pull requests from the 🤗 Datasets repository, but it does not include the comments. It supports tasks such as text classification and text retrieval. Each entry is an English-language discussion centered on NLP, computer vision, and other machine learning datasets.
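As a minimal illustration of how one field from the schema above can be used downstream, the snippet below splits records into issues and pull requests via the `is_pull_request` flag. The records here are mock examples that mirror the preview schema, not real rows from the dataset:

```python
# Mock records mirroring a few fields from the dataset schema
# (number, title, is_pull_request); values are illustrative only.
records = [
    {"number": 7783, "title": "Adapt to huggingface_hub v1.x", "is_pull_request": True},
    {"number": 7782, "title": "set dev version", "is_pull_request": True},
    {"number": 7000, "title": "Loading a dataset fails", "is_pull_request": False},
]

# Partition on the boolean flag: True -> pull request, False -> plain issue.
issues = [r for r in records if not r["is_pull_request"]]
pull_requests = [r for r in records if r["is_pull_request"]]

print(f"{len(issues)} issues, {len(pull_requests)} pull requests")
```

The same filter can be applied after loading the dataset with `datasets.load_dataset`, e.g. with `dataset.filter(lambda r: not r["is_pull_request"])`, to restrict a text-classification or retrieval task to issues only.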
Supported Tasks and Leaderboards
[More Information Needed.]
Dataset Structure
[More Information Needed.]
Data Instances
[More Information Needed.]
Data Fields
[More Information Needed.]