| Column | Dtype | Values |
| --- | --- | --- |
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 – 75 |
| comments_url | stringlengths | 67 – 70 |
| events_url | stringlengths | 65 – 68 |
| html_url | stringlengths | 46 – 51 |
| id | int64 | 599M – 3.53B |
| node_id | stringlengths | 18 – 32 |
| number | int64 | 1 – 7.82k |
| title | stringlengths | 1 – 290 |
| user | dict | |
| labels | listlengths | 0 – 4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 – 4 |
| milestone | dict | |
| comments | int64 | 0 – 70 |
| created_at | stringdate | 2020-04-14 10:18:02 – 2025-10-20 06:38:19 |
| updated_at | stringdate | 2020-04-27 16:04:17 – 2025-10-20 06:41:20 |
| closed_at | stringlengths | 3 – 25 |
| author_association | stringclasses | 4 values |
| type | float64 | |
| active_lock_reason | float64 | |
| draft | float64 | 0 – 1 |
| pull_request | dict | |
| body | stringlengths | 0 – 228k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | stringlengths | 67 – 70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 4 values |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/345/comments
https://api.github.com/repos/huggingface/datasets/issues/345/events
https://github.com/huggingface/datasets/issues/345
651,761,201
MDU6SXNzdWU2NTE3NjEyMDE=
345
Supporting documents in ELI5
{ "avatar_url": "https://avatars.githubusercontent.com/u/29262273?v=4", "events_url": "https://api.github.com/users/saverymax/events{/privacy}", "followers_url": "https://api.github.com/users/saverymax/followers", "following_url": "https://api.github.com/users/saverymax/following{/other_user}", "gists_url": "https://api.github.com/users/saverymax/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saverymax", "id": 29262273, "login": "saverymax", "node_id": "MDQ6VXNlcjI5MjYyMjcz", "organizations_url": "https://api.github.com/users/saverymax/orgs", "received_events_url": "https://api.github.com/users/saverymax/received_events", "repos_url": "https://api.github.com/users/saverymax/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saverymax/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saverymax/subscriptions", "type": "User", "url": "https://api.github.com/users/saverymax", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-07-06 19:14:13+00:00
2020-10-27 15:38:45+00:00
2020-10-27 15:38:45+00:00
NONE
null
null
null
null
I was attempting to use the ELI5 dataset when I realized that Hugging Face does not provide the supporting documents (the source documents from Common Crawl). Without the supporting documents, the dataset is about as useful for my project as a block of cheese, or some other more apt metaphor. According to Facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents, i.e., having some data is better than having a block of cheese, in my case at least. If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(
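A quick way to see the gap the report describes is to inspect the loaded dataset directly; a minimal sketch, assuming the `nlp` library of the time and the `train_eli5` split name:

```python
import nlp

# Load ELI5 and inspect its schema; per the report above, the 'document'
# field exists in the schema but carries no supporting documents.
ds = nlp.load_dataset("eli5")
print(ds)                    # splits and features
print(ds["train_eli5"][0])   # split name assumed; 'document' comes back empty
```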
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/345/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/345/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/344/comments
https://api.github.com/repos/huggingface/datasets/issues/344/events
https://github.com/huggingface/datasets/pull/344
651,495,246
MDExOlB1bGxSZXF1ZXN0NDQ0NzQwMTIw
344
Search qa
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-07-06 12:23:16+00:00
2020-07-16 08:58:16+00:00
2020-07-16 08:58:16+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/344.diff", "html_url": "https://github.com/huggingface/datasets/pull/344", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/344.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/344" }
This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- `raw_jeopardy`: raw data
- `train_test_val`: the split version

#336
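A hedged usage sketch for the two configs, assuming the dataset is registered under the id `search_qa`:

```python
import nlp

# Dataset id assumed; config names come from the PR description above.
raw = nlp.load_dataset("search_qa", "raw_jeopardy")
splits = nlp.load_dataset("search_qa", "train_test_val")
```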
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/344/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/344/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/343/comments
https://api.github.com/repos/huggingface/datasets/issues/343/events
https://github.com/huggingface/datasets/pull/343
651,419,630
MDExOlB1bGxSZXF1ZXN0NDQ0Njc4NDEw
343
Fix nested tensorflow format
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-07-06 10:13:45+00:00
2020-07-06 13:11:52+00:00
2020-07-06 13:11:51+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/343.diff", "html_url": "https://github.com/huggingface/datasets/pull/343", "merged_at": "2020-07-06T13:11:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/343.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/343" }
In #339 and #337 we are thinking about adding a way to export datasets to TFRecords. However, I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`. I also added tests for the `set_format` function.
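A minimal sketch of the conversion idea (illustrative only, not the PR's actual code): recursively wrap variable-length nested lists in `tf.ragged.constant` so nested features survive the TensorFlow format:

```python
import tensorflow as tf

def to_tf(value):
    # Dicts are converted field by field, lists become ragged tensors
    # (which tolerate variable lengths), scalars become dense tensors.
    if isinstance(value, dict):
        return {k: to_tf(v) for k, v in value.items()}
    if isinstance(value, list):
        return tf.ragged.constant(value)
    return tf.constant(value)

# A squad-style nested batch: answers hold variable-length lists per example.
batch = {"answers": {"text": [["Denver Broncos"]], "answer_start": [[177]]}}
print(to_tf(batch)["answers"]["text"])
```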
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/343/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/343/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/342/comments
https://api.github.com/repos/huggingface/datasets/issues/342/events
https://github.com/huggingface/datasets/issues/342
651,333,194
MDU6SXNzdWU2NTEzMzMxOTQ=
342
Features should be updated when `map()` changes schema
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-07-06 08:03:23+00:00
2020-07-23 10:15:16+00:00
2020-07-23 10:15:16+00:00
MEMBER
null
null
null
null
`dataset.map()` can change the schema and column names. We should update the features in this case (with what is possible to infer).
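A hypothetical illustration of the mismatch (dataset and column chosen for the example): after a `map()` that adds a column, `column_names` reflects the change but `features` may still describe the old schema:

```python
import nlp

ds = nlp.load_dataset("squad", split="validation")
ds = ds.map(lambda ex: {"question_len": len(ex["question"])})
print(ds.column_names)  # includes the new "question_len" column
print(ds.features)      # per this issue, may still show the pre-map schema
```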
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/342/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/342/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/341/comments
https://api.github.com/repos/huggingface/datasets/issues/341/events
https://github.com/huggingface/datasets/pull/341
650,611,969
MDExOlB1bGxSZXF1ZXN0NDQ0MDcwMjEx
341
add fever dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-07-03 13:53:07+00:00
2020-07-06 13:03:48+00:00
2020-07-06 13:03:47+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/341.diff", "html_url": "https://github.com/huggingface/datasets/pull/341", "merged_at": "2020-07-06T13:03:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/341.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/341" }
This PR adds the FEVER dataset (https://fever.ai/) introduced in the paper FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/341/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/341/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/340/comments
https://api.github.com/repos/huggingface/datasets/issues/340/events
https://github.com/huggingface/datasets/pull/340
650,533,920
MDExOlB1bGxSZXF1ZXN0NDQ0MDA2Nzcy
340
Update cfq.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/4437290?v=4", "events_url": "https://api.github.com/users/brainshawn/events{/privacy}", "followers_url": "https://api.github.com/users/brainshawn/followers", "following_url": "https://api.github.com/users/brainshawn/following{/other_user}", "gists_url": "https://api.github.com/users/brainshawn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brainshawn", "id": 4437290, "login": "brainshawn", "node_id": "MDQ6VXNlcjQ0MzcyOTA=", "organizations_url": "https://api.github.com/users/brainshawn/orgs", "received_events_url": "https://api.github.com/users/brainshawn/received_events", "repos_url": "https://api.github.com/users/brainshawn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brainshawn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainshawn/subscriptions", "type": "User", "url": "https://api.github.com/users/brainshawn", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-07-03 11:23:19+00:00
2020-07-03 12:33:50+00:00
2020-07-03 12:33:50+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/340.diff", "html_url": "https://github.com/huggingface/datasets/pull/340", "merged_at": "2020-07-03T12:33:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/340.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/340" }
Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/340/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/340/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/339/comments
https://api.github.com/repos/huggingface/datasets/issues/339/events
https://github.com/huggingface/datasets/pull/339
650,156,468
MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw
339
Add dataset.export() to TFRecords
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[]
closed
false
null
[]
null
18
2020-07-02 19:26:27+00:00
2020-07-22 09:16:12+00:00
2020-07-22 09:16:12+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/339.diff", "html_url": "https://github.com/huggingface/datasets/pull/339", "merged_at": "2020-07-22T09:16:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/339.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/339" }
Fixes https://github.com/huggingface/nlp/issues/337

Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc.) to handle custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.

Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
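For context, a rough sketch of the serialization step such an `export()` relies on (assumptions for illustration, not the PR's code): each formatted example becomes a `tf.train.Example` written through `TFRecordWriter`:

```python
import tensorflow as tf

def serialize_example(ex):
    # Int features map to Int64List; string and float features would use
    # BytesList and FloatList respectively.
    feature = {
        "input_ids": tf.train.Feature(
            int64_list=tf.train.Int64List(value=ex["input_ids"])),
        "attention_mask": tf.train.Feature(
            int64_list=tf.train.Int64List(value=ex["attention_mask"])),
    }
    proto = tf.train.Example(features=tf.train.Features(feature=feature))
    return proto.SerializeToString()

with tf.io.TFRecordWriter("/tmp/myrecord.tfrecord") as writer:
    for ex in [{"input_ids": [101, 2054], "attention_mask": [1, 1]}]:
        writer.write(serialize_example(ex))
```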
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 3, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/339/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/339/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/338/comments
https://api.github.com/repos/huggingface/datasets/issues/338/events
https://github.com/huggingface/datasets/pull/338
650,057,253
MDExOlB1bGxSZXF1ZXN0NDQzNjIxMTEx
338
Run `make style`
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-07-02 16:19:47+00:00
2020-07-02 18:03:10+00:00
2020-07-02 18:03:10+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/338.diff", "html_url": "https://github.com/huggingface/datasets/pull/338", "merged_at": "2020-07-02T18:03:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/338.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/338" }
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/338/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/338/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/337/comments
https://api.github.com/repos/huggingface/datasets/issues/337/events
https://github.com/huggingface/datasets/issues/337
650,035,887
MDU6SXNzdWU2NTAwMzU4ODc=
337
[Feature request] Export Arrow dataset to TFRecords
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-07-02 15:47:12+00:00
2020-07-22 09:16:12+00:00
2020-07-22 09:16:12+00:00
CONTRIBUTOR
null
null
null
null
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:
```python
# use these existing methods
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda ex: tokenizer(ex))
ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"])
# then add this method
ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord")
```
which would create files like so:
```bash
/my/tfrecords/myrecord_1.tfrecord
/my/tfrecords/myrecord_2.tfrecord
...
```
I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts?
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/337/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/337/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/336/comments
https://api.github.com/repos/huggingface/datasets/issues/336/events
https://github.com/huggingface/datasets/issues/336
649,914,203
MDU6SXNzdWU2NDk5MTQyMDM=
336
[Dataset requests] New datasets for Open Question Answering
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[ { "color": "008672", "default": true, "description": "Extra attention is needed", "id": 1935892884, "name": "help wanted", "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted" }, { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" } ]
null
0
2020-07-02 13:03:03+00:00
2020-07-16 09:04:22+00:00
2020-07-16 09:04:22+00:00
MEMBER
null
null
null
null
We are still missing a few datasets for Open Question Answering, a field currently under strong development. Namely, it would be really nice to add:
- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al., 2015) [not open-source]
- MS MARCO (Nguyen et al., 2016) [done]
- SearchQA (Dunn et al., 2017) [done]
- FEVER (Thorne et al., 2018) [done]

All these datasets are cited in http://arxiv.org/abs/2005.11401
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/336/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/336/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/335/comments
https://api.github.com/repos/huggingface/datasets/issues/335/events
https://github.com/huggingface/datasets/pull/335
649,765,179
MDExOlB1bGxSZXF1ZXN0NDQzMzgwMjI1
335
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
{ "avatar_url": "https://avatars.githubusercontent.com/u/15162021?v=4", "events_url": "https://api.github.com/users/PetrosStav/events{/privacy}", "followers_url": "https://api.github.com/users/PetrosStav/followers", "following_url": "https://api.github.com/users/PetrosStav/following{/other_user}", "gists_url": "https://api.github.com/users/PetrosStav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PetrosStav", "id": 15162021, "login": "PetrosStav", "node_id": "MDQ6VXNlcjE1MTYyMDIx", "organizations_url": "https://api.github.com/users/PetrosStav/orgs", "received_events_url": "https://api.github.com/users/PetrosStav/received_events", "repos_url": "https://api.github.com/users/PetrosStav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PetrosStav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PetrosStav/subscriptions", "type": "User", "url": "https://api.github.com/users/PetrosStav", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-07-02 09:03:41+00:00
2020-07-15 08:02:07+00:00
2020-07-15 08:02:07+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/335.diff", "html_url": "https://github.com/huggingface/datasets/pull/335", "merged_at": "2020-07-15T08:02:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/335.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/335" }
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/335/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/335/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/334/comments
https://api.github.com/repos/huggingface/datasets/issues/334/events
https://github.com/huggingface/datasets/pull/334
649,661,791
MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0
334
Add dataset.shard() method
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-07-02 06:05:19+00:00
2020-07-06 12:35:36+00:00
2020-07-06 12:35:36+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/334.diff", "html_url": "https://github.com/huggingface/datasets/pull/334", "merged_at": "2020-07-06T12:35:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/334.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/334" }
Fixes https://github.com/huggingface/nlp/issues/312
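A hedged usage sketch of the new `shard()` method (signature as described in the linked issue; exact defaults may differ):

```python
import nlp

ds = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
# Split the dataset into 8 shards and keep the first one.
shard_0 = ds.shard(num_shards=8, index=0)
print(len(ds), len(shard_0))  # the shard holds roughly 1/8 of the rows
```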
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/334/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/334/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/333/comments
https://api.github.com/repos/huggingface/datasets/issues/333/events
https://github.com/huggingface/datasets/pull/333
649,236,516
MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0
333
fix variable name typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-07-01 19:13:50+00:00
2020-07-24 15:43:31+00:00
2020-07-24 08:32:16+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/333.diff", "html_url": "https://github.com/huggingface/datasets/pull/333", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/333.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/333" }
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/333/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/333/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/332/comments
https://api.github.com/repos/huggingface/datasets/issues/332/events
https://github.com/huggingface/datasets/pull/332
649,140,135
MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz
332
Add wiki_dpr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-07-01 17:12:00+00:00
2020-07-06 12:21:17+00:00
2020-07-06 12:21:16+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/332.diff", "html_url": "https://github.com/huggingface/datasets/pull/332", "merged_at": "2020-07-06T12:21:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/332.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/332" }
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.

Notes on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB).
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the Arrow file afterwards (for example, `dataset[0]` was crashing).
- I added the case for lists of URLs as input to the download_manager.
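A sketch of the feature spec implied by the second note, assuming the `nlp` feature types of the time (field names other than `embeddings` are illustrative):

```python
import nlp

features = nlp.Features({
    "id": nlp.Value("string"),
    "text": nlp.Value("string"),
    "title": nlp.Value("string"),
    # Non-fixed-size sequence of floats, per the note above: fixed-size
    # sequences caused crashes when reading the Arrow file back.
    "embeddings": nlp.Sequence(nlp.Value("float32")),
})
```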
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/332/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/332/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/331/comments
https://api.github.com/repos/huggingface/datasets/issues/331/events
https://github.com/huggingface/datasets/issues/331
648,533,199
MDU6SXNzdWU2NDg1MzMxOTk=
331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
5
2020-06-30 22:21:33+00:00
2020-07-09 13:03:40+00:00
2020-07-09 13:03:40+00:00
CONTRIBUTOR
null
null
null
null
```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
    builder_instance.download_and_prepare(
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
    self._download_and_prepare(
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
```
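An aside on reproducing past the error: if the installed version exposes it, the `ignore_verifications` flag (an assumption about the API of the time) skips this split-size check; the mismatch itself still points at stale recorded split sizes:

```python
import nlp

# Assumed workaround: bypass verify_splits. The recorded sizes in the
# dataset info remain out of date until the dataset script is fixed.
ds = nlp.load_dataset("cnn_dailymail", "3.0.0", ignore_verifications=True)
```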
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/331/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/331/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/330/comments
https://api.github.com/repos/huggingface/datasets/issues/330/events
https://github.com/huggingface/datasets/pull/330
648,525,720
MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw
330
DocRED
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-30 22:05:31+00:00
2020-07-06 12:10:39+00:00
2020-07-05 12:27:29+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/330.diff", "html_url": "https://github.com/huggingface/datasets/pull/330", "merged_at": "2020-07-05T12:27:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/330.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/330" }
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes (a usage sketch follows below):

- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this.
- As well as the relation id, the full relation name is mapped from `rel_info.json`.
- I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable.
- Used the fix from #319 to allow nested sequences of dicts.
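Going by the notes above, loading the two training variants might look like this. The split names come from this PR; treating `docred` as the dataset id is an assumption about how the script is registered.

```python
import nlp

# Sketch based on the split names described in this PR; "docred" as the
# dataset id is an assumption, not confirmed by the PR text.
train_annotated = nlp.load_dataset("docred", split="train_annotated")
train_distant = nlp.load_dataset("docred", split="train_distant")

example = train_annotated[0]
print(example["title"])
```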
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/330/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/330/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/329/comments
https://api.github.com/repos/huggingface/datasets/issues/329/events
https://github.com/huggingface/datasets/issues/329
648,446,979
MDU6SXNzdWU2NDg0NDY5Nzk=
329
[Bug] FileLock dependency incompatible with filesystem
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[]
closed
false
null
[]
null
11
2020-06-30 19:45:31+00:00
2024-12-26 15:13:39+00:00
2020-06-30 21:33:06+00:00
CONTRIBUTOR
null
null
null
null
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")`

But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`

The filesystem when hanging looks like this:

```bash
/fsx
----downloads
----94be...73.lock
----wikitext
----wikitext-2-raw
----wikitext-2-raw-1.0.0.incomplete
```

It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency:

```python
open("/fsx/hello.txt", "w").write("hello") # succeeds

from filelock import FileLock

with FileLock("/fsx/hello.lock"):
    open("/fsx/hello.txt", "w").write("hello") # hangs indefinitely
```

Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
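One way to confirm this is a lock-acquisition hang rather than a crash is `filelock`'s timeout support, which raises `filelock.Timeout` instead of blocking forever. A diagnostic sketch (the `/fsx` paths are taken from the report above):

```python
from filelock import FileLock, Timeout

lock = FileLock("/fsx/hello.lock")
try:
    # Give up after 10 seconds instead of blocking indefinitely.
    with lock.acquire(timeout=10):
        with open("/fsx/hello.txt", "w") as f:
            f.write("hello")
except Timeout:
    print("Could not acquire the lock: the filesystem likely does not "
          "support the locking primitive that filelock relies on.")
```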
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/329/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/329/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/328/comments
https://api.github.com/repos/huggingface/datasets/issues/328/events
https://github.com/huggingface/datasets/issues/328
648,326,841
MDU6SXNzdWU2NDgzMjY4NDE=
328
Fork dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent", "user_view_type": "public" }
[]
closed
false
null
[]
null
5
2020-06-30 16:42:53+00:00
2020-07-06 21:43:59+00:00
2020-07-06 21:43:59+00:00
NONE
null
null
null
null
We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp datasets. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and JSON with entity and relation annotations and creates two datasets for training NER and relation-prediction heads.

Is there some good way to "fork" a dataset? E.g.:

1. text + json -> Dataset1
2. Dataset1 -> DatasetNER
3. Dataset1 -> DatasetREL

or

1. text + json -> Dataset1
2. Dataset1 -> DatasetNER
3. Dataset1 + DatasetNER -> DatasetREL

(See the sketch after this list for one possible approach.)
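Since `nlp.Dataset.map` returns a new Arrow-backed dataset rather than mutating the original, one hedged way to "fork" is to build `Dataset1` once and derive the task-specific views with `map`. A minimal sketch, with made-up column names, assuming `Dataset.from_dict` is available in this version of `nlp`:

```python
import nlp

# Hypothetical output of the preprocessing step, flattened to strings
# for illustration.
dataset1 = nlp.Dataset.from_dict({
    "text": ["Alice visited Paris."],
    "entities": ["Alice|PER;Paris|LOC"],
    "relations": ["Alice|visited|Paris"],
})

# Each map() call produces a new Arrow-backed dataset, so the two
# "forks" leave dataset1 untouched.
dataset_ner = dataset1.map(lambda ex: {"labels": ex["entities"]})
dataset_rel = dataset1.map(lambda ex: {"labels": ex["relations"]})
```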
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/328/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/328/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/327/comments
https://api.github.com/repos/huggingface/datasets/issues/327/events
https://github.com/huggingface/datasets/pull/327
648,312,858
MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw
327
set seed for shuffling tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-30 16:21:34+00:00
2020-07-02 08:34:05+00:00
2020-07-02 08:34:04+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/327.diff", "html_url": "https://github.com/huggingface/datasets/pull/327", "merged_at": "2020-07-02T08:34:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/327.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/327" }
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
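For reference, pinning the seed in such a test might look like this. A small sketch, assuming `Dataset.from_dict` is available in this version of `nlp`:

```python
import nlp

dataset = nlp.Dataset.from_dict({"x": list(range(10))})

# Without a fixed seed, shuffle=True draws a fresh permutation on every
# run, so assertions on the resulting splits can fail intermittently.
splits = dataset.train_test_split(test_size=0.2, shuffle=True, seed=42)
assert len(splits["train"]) == 8 and len(splits["test"]) == 2
```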
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/327/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/327/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/326/comments
https://api.github.com/repos/huggingface/datasets/issues/326/events
https://github.com/huggingface/datasets/issues/326
648,126,103
MDU6SXNzdWU2NDgxMjYxMDM=
326
Large dataset in Squad2-format
{ "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/flozi00", "id": 47894090, "login": "flozi00", "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "organizations_url": "https://api.github.com/users/flozi00/orgs", "received_events_url": "https://api.github.com/users/flozi00/received_events", "repos_url": "https://api.github.com/users/flozi00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "type": "User", "url": "https://api.github.com/users/flozi00", "user_view_type": "public" }
[]
closed
false
null
[]
null
8
2020-06-30 12:18:59+00:00
2020-07-09 09:01:50+00:00
2020-07-09 09:01:50+00:00
CONTRIBUTOR
null
null
null
null
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format.

Right now the most important facts are these:

- Contexts: 1.047.671
- Questions: 1.677.732
- Answers: 6.742.406
- Unanswerable: 377.398

It is already cleaned:

```python
train_data = [
    {
        'context': "this is the context",
        'qas': [
            {
                'id': "00002",
                'is_impossible': False,
                'question': "whats is this",
                'answers': [
                    {
                        'text': "answer",
                        'answer_start': 0
                    }
                ]
            },
            {
                'id': "00003",
                'is_impossible': False,
                'question': "question2",
                'answers': [
                    {
                        'text': "answer2",
                        'answer_start': 1
                    }
                ]
            }
        ]
    }
]
```

Because it is growing every day, we are thinking about a structure like this: we host a JSON file containing all the download links, and the script can load it dynamically. At the moment it is around ~20GB.

Any advice on how to handle this, or a ready-to-use template?
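A hedged sketch of the "manifest JSON" idea in loading-script form: the script downloads a small index file first, then fetches whatever tiles it lists. All URLs, class names, and the bool feature type here are placeholders/assumptions, not part of the report above.

```python
import json
import nlp

_INDEX_URL = "https://example.com/dataset/index.json"  # placeholder

class MyQADataset(nlp.GeneratorBasedBuilder):
    def _info(self):
        return nlp.DatasetInfo(features=nlp.Features({
            "context": nlp.Value("string"),
            "question": nlp.Value("string"),
            "answers": nlp.features.Sequence({
                "text": nlp.Value("string"),
                "answer_start": nlp.Value("int32"),
            }),
            "is_impossible": nlp.Value("bool"),
        }))

    def _split_generators(self, dl_manager):
        # The index file lists the download links of all current tiles,
        # so new tiles can be added without touching the script.
        index_path = dl_manager.download(_INDEX_URL)
        with open(index_path) as f:
            tile_urls = json.load(f)
        tile_paths = dl_manager.download(tile_urls)
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN,
                                   gen_kwargs={"paths": tile_paths})]

    def _generate_examples(self, paths):
        for path in paths:
            with open(path) as f:
                for entry in json.load(f):
                    for qa in entry["qas"]:
                        yield qa["id"], {
                            "context": entry["context"],
                            "question": qa["question"],
                            "answers": qa["answers"],
                            "is_impossible": qa["is_impossible"],
                        }
```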
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/326/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/326/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/325/comments
https://api.github.com/repos/huggingface/datasets/issues/325/events
https://github.com/huggingface/datasets/pull/325
647,601,592
MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw
325
Add SQuADShifts dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8953195?v=4", "events_url": "https://api.github.com/users/millerjohnp/events{/privacy}", "followers_url": "https://api.github.com/users/millerjohnp/followers", "following_url": "https://api.github.com/users/millerjohnp/following{/other_user}", "gists_url": "https://api.github.com/users/millerjohnp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/millerjohnp", "id": 8953195, "login": "millerjohnp", "node_id": "MDQ6VXNlcjg5NTMxOTU=", "organizations_url": "https://api.github.com/users/millerjohnp/orgs", "received_events_url": "https://api.github.com/users/millerjohnp/received_events", "repos_url": "https://api.github.com/users/millerjohnp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/millerjohnp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/millerjohnp/subscriptions", "type": "User", "url": "https://api.github.com/users/millerjohnp", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-29 19:11:16+00:00
2020-06-30 17:07:31+00:00
2020-06-30 17:07:31+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/325.diff", "html_url": "https://github.com/huggingface/datasets/pull/325", "merged_at": "2020-06-30T17:07:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/325.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/325" }
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/325/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/324/comments
https://api.github.com/repos/huggingface/datasets/issues/324/events
https://github.com/huggingface/datasets/issues/324
647,525,725
MDU6SXNzdWU2NDc1MjU3MjU=
324
Error when calculating glue score
{ "avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4", "events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}", "followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers", "following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}", "gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/D-i-l-r-u-k-s-h-i", "id": 47185867, "login": "D-i-l-r-u-k-s-h-i", "node_id": "MDQ6VXNlcjQ3MTg1ODY3", "organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs", "received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events", "repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos", "site_admin": false, "starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions", "type": "User", "url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2020-06-29 16:53:48+00:00
2020-07-09 09:13:34+00:00
2020-07-09 09:13:34+00:00
NONE
null
null
null
null
I was trying glue score along with other metrics here. But glue gives me this error;

```python
import nlp
glue_metric = nlp.load_metric('glue', name="cola")
glue_score = glue_metric.compute(predictions, references)
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-b9210a524504> in <module>()
----> 1 glue_score = glue_metric.compute(predictions, references)

6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
    191         """
    192         if predictions is not None:
--> 193             self.add_batch(predictions=predictions, references=references)
    194         self.finalize(timeout=timeout)
    195

/usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs)
    207         if self.writer is None:
    208             self._init_writer()
--> 209         self.writer.write_batch(batch)
    210
    211     def add(self, prediction=None, reference=None, **kwargs):

/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
    155         if self.pa_writer is None:
    156             self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))
--> 157         pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
    158         if writer_batch_size is None:
    159             writer_batch_size = self.writer_batch_size

/usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__()

/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()

/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()

/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()

TypeError: an integer is required (got type str)
```

I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you.
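The error suggests the CoLA config of the glue metric expects integer class labels, so string predictions or references trigger the pyarrow type error above. A hedged sketch of a fix; the string labels and the mapping are assumptions about the caller's data, not part of this report:

```python
import nlp

glue_metric = nlp.load_metric('glue', name="cola")

# Hypothetical string labels; glue/cola wants integers, so map them first.
label_map = {"unacceptable": 0, "acceptable": 1}
predictions = [label_map[p] for p in ["acceptable", "unacceptable"]]
references = [label_map[r] for r in ["acceptable", "acceptable"]]

glue_score = glue_metric.compute(predictions, references)
```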
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/324/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/324/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/323/comments
https://api.github.com/repos/huggingface/datasets/issues/323/events
https://github.com/huggingface/datasets/pull/323
647,521,308
MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3
323
Add package path to sys when downloading package as github archive
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-29 16:46:01+00:00
2020-07-30 14:00:23+00:00
2020-07-30 14:00:23+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/323.diff", "html_url": "https://github.com/huggingface/datasets/pull/323", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/323.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/323" }
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh) @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method. This PR fixes https://github.com/huggingface/nlp/issues/305
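For context, the trick being referred to is roughly this pattern, applied after the metric's package is fetched as a GitHub archive. This is a sketch of the general idea, not the exact code in the PR, and the path is a placeholder:

```python
import sys

# Sketch: after downloading and extracting the GitHub archive of the
# package (e.g. the coval repo), add its root directory to sys.path so
# the package's internal absolute imports resolve.
extracted_path = "/path/to/extracted/coval-master"  # placeholder
if extracted_path not in sys.path:
    sys.path.append(extracted_path)

# From here, `import coval.<...>`-style imports inside the downloaded
# metric module can succeed.
```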
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/323/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/322/comments
https://api.github.com/repos/huggingface/datasets/issues/322/events
https://github.com/huggingface/datasets/pull/322
647,483,850
MDExOlB1bGxSZXF1ZXN0NDQxNTAyMjc2
322
output nested dict in get_nearest_examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-29 15:47:47+00:00
2020-07-02 08:33:33+00:00
2020-07-02 08:33:32+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/322.diff", "html_url": "https://github.com/huggingface/datasets/pull/322", "merged_at": "2020-07-02T08:33:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/322.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/322" }
As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example:

```python
my_examples = dataset[0:10]
print(type(my_examples))  # >>> dict
print(my_examples["my_column"][0])  # >>> this is the first element of the column 'my_column'
```

Therefore I wanted to keep this logic when calling `get_nearest_examples`, which returns the top 10 nearest examples:

```python
dataset.add_faiss_index(column="embeddings")
scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding)
print(type(examples))  # >>> dict
```

Previously it was returning a list[dict]. It was the only place that was using this output format.

To make it work I had to implement `__getitem__(key)` where `key` is a list. This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries).
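In other words, after this change list indexing behaves like slicing. A small illustration continuing the example above (column names are placeholders):

```python
# Extraction with a list of indices now returns a dict of columns,
# mirroring the slice behaviour shown earlier.
rows = dataset[[1, 4, 7]]
print(type(rows))         # dict
print(rows["my_column"])  # values at indices 1, 4 and 7

# .select, by contrast, is a transform: it returns a new Dataset object.
subset = dataset.select([1, 4, 7])
```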
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/322/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/322/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/321/comments
https://api.github.com/repos/huggingface/datasets/issues/321/events
https://github.com/huggingface/datasets/issues/321
647,271,526
MDU6SXNzdWU2NDcyNzE1MjY=
321
ERROR:root:mwparserfromhell
{ "avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4", "events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}", "followers_url": "https://api.github.com/users/Shiro-LK/followers", "following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}", "gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shiro-LK", "id": 26505641, "login": "Shiro-LK", "node_id": "MDQ6VXNlcjI2NTA1NjQx", "organizations_url": "https://api.github.com/users/Shiro-LK/orgs", "received_events_url": "https://api.github.com/users/Shiro-LK/received_events", "repos_url": "https://api.github.com/users/Shiro-LK/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions", "type": "User", "url": "https://api.github.com/users/Shiro-LK", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
10
2020-06-29 11:10:43+00:00
2022-02-14 15:21:46+00:00
2022-02-14 15:21:46+00:00
NONE
null
null
null
null
Hi, I am trying to download some Wikipedia data, but I get this error for Spanish ("es"); other languages may have the same error, I haven't tried all of them.

```
ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.
```

The code I used was:

```python
dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/321/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/321/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/320/comments
https://api.github.com/repos/huggingface/datasets/issues/320/events
https://github.com/huggingface/datasets/issues/320
647,188,167
MDU6SXNzdWU2NDcxODgxNjc=
320
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
2
2020-06-29 07:36:35+00:00
2020-06-29 14:44:42+00:00
2020-06-29 14:44:42+00:00
CONTRIBUTOR
null
null
null
null
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:

```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]

Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
    exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 172, in <module>
    dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
    return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 132, in get
    builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
```

@srush @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/320/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/319/comments
https://api.github.com/repos/huggingface/datasets/issues/319/events
https://github.com/huggingface/datasets/issues/319
646,792,487
MDU6SXNzdWU2NDY3OTI0ODc=
319
Nested sequences with dicts
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-27 23:45:17+00:00
2020-07-03 10:22:00+00:00
2020-07-03 10:22:00+00:00
CONTRIBUTOR
null
null
null
null
Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.Sequence(nlp.features.Sequence({key:value,...}))`.

The original data is in this format:

```python
{
    'title': "Title of wiki page",
    'vertexSet': [
        [
            {
                'name': "mention_name",
                'sent_id': "mention in which sentence",
                'pos': ["postion of mention in a sentence"],
                'type': "NER_type"
            },
            {another mention}
        ],
        [another entity]
    ]
    ...
}
```

So to represent this I've attempted to write:

```python
...
features=nlp.Features({
    "title": nlp.Value("string"),
    "vertexSet": nlp.features.Sequence(nlp.features.Sequence({
        "name": nlp.Value("string"),
        "sent_id": nlp.Value("int32"),
        "pos": nlp.features.Sequence(nlp.Value("int32")),
        "type": nlp.Value("string"),
    })),
    ...
}),
...
```

This is giving me the error:

```
pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```

Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string")))` or `nlp.features.Sequence({key:value,...})`, just not nested sequences with a dict.

If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.
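If the deeper nesting stays unsupported, one hedged version of the "shallower structure" mentioned at the end is to flatten the inner list of mentions and record which entity each mention belongs to. A sketch only, not the schema the DocRED script ended up using:

```python
import nlp

# Sketch: one flat Sequence of mentions, with `entity_id` recording which
# entity each mention belongs to, instead of Sequence(Sequence({...})).
features = nlp.Features({
    "title": nlp.Value("string"),
    "mentions": nlp.features.Sequence({
        "entity_id": nlp.Value("int32"),
        "name": nlp.Value("string"),
        "sent_id": nlp.Value("int32"),
        "pos": nlp.features.Sequence(nlp.Value("int32")),
        "type": nlp.Value("string"),
    }),
})
```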
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/319/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/318/comments
https://api.github.com/repos/huggingface/datasets/issues/318/events
https://github.com/huggingface/datasets/pull/318
646,682,840
MDExOlB1bGxSZXF1ZXN0NDQwOTExOTYy
318
Multitask
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
[]
closed
false
null
[]
null
18
2020-06-27 13:27:29+00:00
2022-07-06 15:19:57+00:00
2022-07-06 15:19:57+00:00
CONTRIBUTOR
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/318.diff", "html_url": "https://github.com/huggingface/datasets/pull/318", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/318.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/318" }
Following our discussion in #217, I've implemented a first working version of `MultiDataset`. There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage. I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment. This will need some tests which I haven't written yet. There's definitely room for improvements but I think the general approach is sound.
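Going by the description above, usage might look roughly like this. Purely a sketch of the proposed API: `build_multitask()` and `MultiDataset` are the names given in this PR, but the exact call signature and import location here are assumptions (the PR's notebook has the real examples).

```python
import nlp

squad = nlp.load_dataset("squad")
ag_news = nlp.load_dataset("ag_news")

# build_multitask() is described as accepting individual nlp.Datasets or
# dicts of splits and constructing MultiDataset(s); the exact signature
# is an assumption here.
multitask = build_multitask(squad["train"], ag_news["train"])
print(multitask.num_rows, multitask.column_names)
```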
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/318/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/318/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/317/comments
https://api.github.com/repos/huggingface/datasets/issues/317/events
https://github.com/huggingface/datasets/issues/317
646,555,384
MDU6SXNzdWU2NDY1NTUzODQ=
317
Adding a dataset with multiple subtasks
{ "avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4", "events_url": "https://api.github.com/users/erickrf/events{/privacy}", "followers_url": "https://api.github.com/users/erickrf/followers", "following_url": "https://api.github.com/users/erickrf/following{/other_user}", "gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/erickrf", "id": 294483, "login": "erickrf", "node_id": "MDQ6VXNlcjI5NDQ4Mw==", "organizations_url": "https://api.github.com/users/erickrf/orgs", "received_events_url": "https://api.github.com/users/erickrf/received_events", "repos_url": "https://api.github.com/users/erickrf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erickrf/subscriptions", "type": "User", "url": "https://api.github.com/users/erickrf", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-26 23:14:19+00:00
2020-10-27 15:36:52+00:00
2020-10-27 15:36:52+00:00
NONE
null
null
null
null
I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation -- each of which has different language pairs, and some of the data is reused across subtasks. For example, in [QE 2019](http://www.statmt.org/wmt19/qe-task.html), we had the same English-Russian and English-German data for word-level and sentence-level QE. I suppose these datasets could have both their word and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered a different dataset altogether? I read the discussion on #217 but the case of QE seems a lot simpler.
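One common pattern for this in `nlp` is to expose each subtask/language-pair combination as a `BuilderConfig` of a single dataset. A sketch under that assumption; the class and config names are invented for illustration:

```python
import nlp

class WmtQeConfig(nlp.BuilderConfig):
    def __init__(self, level=None, language_pair=None, **kwargs):
        super().__init__(**kwargs)
        self.level = level                  # "word", "sentence" or "document"
        self.language_pair = language_pair  # e.g. ("en", "de")

class WmtQe(nlp.GeneratorBasedBuilder):
    # One config per subtask/language pair; data files shared between
    # subtasks can be downloaded once and reused by several configs.
    BUILDER_CONFIGS = [
        WmtQeConfig(name="word_en_de", version=nlp.Version("1.0.0"),
                    level="word", language_pair=("en", "de")),
        WmtQeConfig(name="sentence_en_de", version=nlp.Version("1.0.0"),
                    level="sentence", language_pair=("en", "de")),
    ]
```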
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/317/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/317/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/316/comments
https://api.github.com/repos/huggingface/datasets/issues/316/events
https://github.com/huggingface/datasets/pull/316
646,366,450
MDExOlB1bGxSZXF1ZXN0NDQwNjY5NzY5
316
add AG News dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-26 16:11:58+00:00
2020-06-30 09:58:08+00:00
2020-06-30 08:31:55+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/316.diff", "html_url": "https://github.com/huggingface/datasets/pull/316", "merged_at": "2020-06-30T08:31:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/316.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/316" }
adds support for the AG-News topic classification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/316/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/316/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/315/comments
https://api.github.com/repos/huggingface/datasets/issues/315/events
https://github.com/huggingface/datasets/issues/315
645,888,943
MDU6SXNzdWU2NDU4ODg5NDM=
315
[Question] Best way to batch a large dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
11
2020-06-25 22:30:20+00:00
2020-10-27 15:38:17+00:00
NaT
CONTRIBUTOR
null
null
null
null
I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow:

```python
train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
### Question about this last line ###
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
```

This code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because `tf.data.Dataset.from_tensor_slices()` is pulling everything into memory rather than loading lazily. This approach won't scale up to datasets 25x larger, such as Wikipedia.

So I tried manual batching using `dataset.select()`:

```python
idxs = np.random.randint(len(dataset), size=bsz)
batch = dataset.select(idxs).map(lambda example: {"input_ids": tokenizer(example["text"])})
tf_batch = tf.constant(batch["input_ids"], dtype=tf.int64)
```

This appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. So using `select()` doesn't seem to be performant enough for a training loop. Is there a performant, scalable way to lazily load batches of `nlp` Datasets?
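One direction worth sketching here (an assumption about a workable pattern, not a confirmed recommendation from the library): wrap the Arrow-backed dataset in a Python generator and hand it to `tf.data.Dataset.from_generator`, so TensorFlow pulls examples lazily instead of materializing the whole corpus.

```python
import tensorflow as tf

# Minimal lazy-batching sketch; assumes `train_tf_dataset` is already
# tokenized and each example exposes a variable-length "input_ids" list.
def example_gen():
    for example in train_tf_dataset:  # iterates the Arrow table row by row
        yield example["input_ids"]

lazy_ds = (
    tf.data.Dataset.from_generator(
        example_gen, output_types=tf.int64, output_shapes=[None]
    )
    .padded_batch(8, padded_shapes=[None])  # pad each batch to its longest example
    .prefetch(tf.data.experimental.AUTOTUNE)
)
```

Whether this is fast enough in practice depends on how quickly the underlying Arrow table can be iterated, but it at least avoids the up-front `from_tensor_slices` materialization.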
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/315/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/315/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/314/comments
https://api.github.com/repos/huggingface/datasets/issues/314/events
https://github.com/huggingface/datasets/pull/314
645,461,174
MDExOlB1bGxSZXF1ZXN0NDM5OTM4MTMw
314
Fixed singular very minor spelling error
{ "avatar_url": "https://avatars.githubusercontent.com/u/40696362?v=4", "events_url": "https://api.github.com/users/SchizoidBat/events{/privacy}", "followers_url": "https://api.github.com/users/SchizoidBat/followers", "following_url": "https://api.github.com/users/SchizoidBat/following{/other_user}", "gists_url": "https://api.github.com/users/SchizoidBat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SchizoidBat", "id": 40696362, "login": "SchizoidBat", "node_id": "MDQ6VXNlcjQwNjk2MzYy", "organizations_url": "https://api.github.com/users/SchizoidBat/orgs", "received_events_url": "https://api.github.com/users/SchizoidBat/received_events", "repos_url": "https://api.github.com/users/SchizoidBat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SchizoidBat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SchizoidBat/subscriptions", "type": "User", "url": "https://api.github.com/users/SchizoidBat", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-25 10:45:59+00:00
2020-06-26 08:46:41+00:00
2020-06-25 12:43:59+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/314.diff", "html_url": "https://github.com/huggingface/datasets/pull/314", "merged_at": "2020-06-25T12:43:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/314.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/314" }
An instance of "independantly" was changed to "independently". That's all.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/314/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/314/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/313/comments
https://api.github.com/repos/huggingface/datasets/issues/313/events
https://github.com/huggingface/datasets/pull/313
645,390,088
MDExOlB1bGxSZXF1ZXN0NDM5ODc4MDg5
313
Add MWSC
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
1
2020-06-25 09:22:02+00:00
2020-06-30 08:28:11+00:00
2020-06-30 08:28:11+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/313.diff", "html_url": "https://github.com/huggingface/datasets/pull/313", "merged_at": "2020-06-30T08:28:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/313.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/313" }
Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset, which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it outside of the benchmark, but it is general purpose. Code is heavily borrowed from the [decaNLP repo](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L773-L877).

There are a few (possibly overly opinionated) design choices I made:
- I used the train/test/dev split [buried in the decaNLP code](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L852-L855).
- I split out each example into the 2 alternatives. Originally the data uses the format:
```
The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
Who [feared/advocated] violence?
councilmen/demonstrators
```
I split into the 2 variants:
```
The city councilmen refused the demonstrators a permit because they feared violence.
Who feared violence?
councilmen/demonstrators

The city councilmen refused the demonstrators a permit because they advocated violence.
Who advocated violence?
councilmen/demonstrators
```
I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes them](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L846-L850)). You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https://github.com/huggingface/nlp/blob/master/datasets/winogrande/winogrande.py) presents the data in this way?
- I've not used the decaNLP framing (appending the options to the question, e.g. `Who feared violence? -- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `"options": ["councilmen", "demonstrators"]`. This should be an easy thing to change using `map` if needed by a specific application (see the sketch below).

Dataset is working as-is, but if anyone has any thoughts/preferences on the design decisions here I'm definitely open to different choices.
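For anyone who does want the decaNLP framing back, a small `map` along these lines should do it; this is only a sketch, assuming examples carry the `question` and `options` keys described above.

```python
# Restore the decaNLP-style framing from the generic schema; assumes each
# example has a "question" string and an "options" list of two candidates.
def add_decanlp_framing(example):
    # "Who feared violence?" -> "Who feared violence? -- councilmen or demonstrators?"
    example["question"] += " -- " + " or ".join(example["options"]) + "?"
    return example

dataset = dataset.map(add_decanlp_framing)
```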
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/313/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/313/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/312/comments
https://api.github.com/repos/huggingface/datasets/issues/312/events
https://github.com/huggingface/datasets/issues/312
645,025,561
MDU6SXNzdWU2NDUwMjU1NjE=
312
[Feature request] Add `shard()` method to dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-24 22:48:33+00:00
2020-07-06 12:35:36+00:00
2020-07-06 12:35:36+00:00
CONTRIBUTOR
null
null
null
null
Currently, to shard a dataset into 10 pieces on different ranks, you can run

```python
rank = 3  # for example
size = 10
dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]")
```

However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this?

```python
rank = 3
size = 64
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size)
```

TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code.
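As a rough illustration of the proposed semantics (an assumption about one possible implementation, not the eventual one), a strided version could be built on top of the existing `select()`:

```python
# Hypothetical shard(): every `size`-th example starting at offset `rank`.
# Strided vs. contiguous sharding is a design choice left open here.
def shard(dataset, rank, size):
    return dataset.select(list(range(rank, len(dataset), size)))

subset = shard(dataset, rank=3, size=64)  # one of 64 roughly equal shards
```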
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/312/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/312/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/311/comments
https://api.github.com/repos/huggingface/datasets/issues/311/events
https://github.com/huggingface/datasets/pull/311
645,013,131
MDExOlB1bGxSZXF1ZXN0NDM5NTQ3OTg0
311
Add qa_zre
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-24 22:17:22+00:00
2020-06-29 16:37:38+00:00
2020-06-29 16:37:38+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/311.diff", "html_url": "https://github.com/huggingface/datasets/pull/311", "merged_at": "2020-06-29T16:37:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/311.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/311" }
Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/). A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`.
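Leaving the substitution to users keeps the dataset faithful to the source files; a one-line `map` like the following should cover the common case, assuming the fields are named `question` and `subject` (the names here are illustrative).

```python
# Fill the XXX placeholder in each question with the relation subject;
# field names are assumptions based on the description above.
dataset = dataset.map(
    lambda ex: {"question": ex["question"].replace("XXX", ex["subject"])}
)
```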
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/311/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/311/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/310/comments
https://api.github.com/repos/huggingface/datasets/issues/310/events
https://github.com/huggingface/datasets/pull/310
644,806,720
MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5
310
add wikisql
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-24 18:00:35+00:00
2020-06-25 12:32:25+00:00
2020-06-25 12:32:25+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/310.diff", "html_url": "https://github.com/huggingface/datasets/pull/310", "merged_at": "2020-06-25T12:32:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/310.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/310" }
Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications. - `conds` was originally a tuple but is converted to a dictionary to support differing types. Would be nice to add the logical_form metrics too at some point.
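For readers unfamiliar with the WikiSQL format, the conversion boils down to rendering the `sel`/`agg`/`conds` query structure as SQL text. The sketch below is only an approximation of what `_convert_to_human_readable` does, with op tables taken from the WikiSQL repo; the exact output format of the real function may differ.

```python
# Approximate reconstruction of a WikiSQL query as human-readable SQL.
AGG_OPS = ["", "MAX", "MIN", "COUNT", "SUM", "AVG"]
COND_OPS = ["=", ">", "<", "OP"]

def to_readable_sql(sel, agg, conds, columns):
    col = columns[sel]
    select = f"SELECT {AGG_OPS[agg]}({col})" if agg else f"SELECT {col}"
    if not conds:
        return select
    where = " AND ".join(
        f"{columns[c]} {COND_OPS[op]} {val}" for c, op, val in conds
    )
    return f"{select} WHERE {where}"
```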
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/310/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/310/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/309/comments
https://api.github.com/repos/huggingface/datasets/issues/309/events
https://github.com/huggingface/datasets/pull/309
644,783,822
MDExOlB1bGxSZXF1ZXN0NDM5MzQ1NzYz
309
Add narrative qa
{ "avatar_url": "https://avatars.githubusercontent.com/u/8019486?v=4", "events_url": "https://api.github.com/users/Varal7/events{/privacy}", "followers_url": "https://api.github.com/users/Varal7/followers", "following_url": "https://api.github.com/users/Varal7/following{/other_user}", "gists_url": "https://api.github.com/users/Varal7/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Varal7", "id": 8019486, "login": "Varal7", "node_id": "MDQ6VXNlcjgwMTk0ODY=", "organizations_url": "https://api.github.com/users/Varal7/orgs", "received_events_url": "https://api.github.com/users/Varal7/received_events", "repos_url": "https://api.github.com/users/Varal7/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Varal7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Varal7/subscriptions", "type": "User", "url": "https://api.github.com/users/Varal7", "user_view_type": "public" }
[]
closed
false
null
[]
null
11
2020-06-24 17:26:18+00:00
2020-09-03 09:02:10+00:00
2020-09-03 09:02:09+00:00
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/309.diff", "html_url": "https://github.com/huggingface/datasets/pull/309", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/309.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/309" }
Test cases for dummy data don't pass. Only contains data for summaries (not the whole story).
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/309/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/309/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/308/comments
https://api.github.com/repos/huggingface/datasets/issues/308/events
https://github.com/huggingface/datasets/pull/308
644,195,251
MDExOlB1bGxSZXF1ZXN0NDM4ODYyMzYy
308
Specify utf-8 encoding for MRPC files
{ "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patpizio", "id": 15801338, "login": "patpizio", "node_id": "MDQ6VXNlcjE1ODAxMzM4", "organizations_url": "https://api.github.com/users/patpizio/orgs", "received_events_url": "https://api.github.com/users/patpizio/received_events", "repos_url": "https://api.github.com/users/patpizio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "type": "User", "url": "https://api.github.com/users/patpizio", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-23 22:44:36+00:00
2020-06-25 12:52:21+00:00
2020-06-25 12:16:10+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/308.diff", "html_url": "https://github.com/huggingface/datasets/pull/308", "merged_at": "2020-06-25T12:16:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/308.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/308" }
Fixes #307, again probably a Windows-related issue.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/308/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/308/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/307/comments
https://api.github.com/repos/huggingface/datasets/issues/307/events
https://github.com/huggingface/datasets/issues/307
644,187,262
MDU6SXNzdWU2NDQxODcyNjI=
307
Specify encoding for MRPC
{ "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patpizio", "id": 15801338, "login": "patpizio", "node_id": "MDQ6VXNlcjE1ODAxMzM4", "organizations_url": "https://api.github.com/users/patpizio/orgs", "received_events_url": "https://api.github.com/users/patpizio/received_events", "repos_url": "https://api.github.com/users/patpizio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "type": "User", "url": "https://api.github.com/users/patpizio", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-23 22:24:49+00:00
2020-06-25 12:16:09+00:00
2020-06-25 12:16:09+00:00
CONTRIBUTOR
null
null
null
null
Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset:

```python
dataset = nlp.load_dataset('glue', 'mrpc')
```

```python
Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0...
---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname)
    369             try:
--> 370                 yield tmp_dir
    371                 if os.path.isdir(dirname):

~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
    430         verify_infos = not save_infos and not ignore_verifications
--> 431         self._download_and_prepare(
    432             dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs

~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    482                 # Prepare split will record examples associated to the split
--> 483                 self._prepare_split(split_generator, **prepare_split_kwargs)
    484             except OSError:

~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator)
    663         generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664         for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
    665             example = self.info.features.encode_example(record)

~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs)
    217         try:
--> 218             for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
    219                 # return super(tqdm...) will not catch exception

~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
   1128         try:
-> 1129             for obj in iterable:
   1130                 yield obj

~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files)
    514             examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split)
--> 515             for example in examples:
    516                 yield example["idx"], example

~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split)
    576             reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
--> 577             for n, row in enumerate(reader):
    578                 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids

~\Miniconda3\envs\nlp\lib\csv.py in __next__(self)
    110             self.fieldnames
--> 111         row = next(self.reader)
    112         self.line_num = self.reader.line_num

~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final)
     22     def decode(self, input, final=False):
---> 23         return codecs.charmap_decode(input,self.errors,decoding_table)[0]
     24

UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined>
```

The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE. I am going to propose a new PR :)
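The gist of the fix, sketched with an illustrative file path (the real patch lives in the GLUE script's MRPC loader):

```python
import csv

mrpc_file = "msr_paraphrase_train.txt"  # illustrative path

# Force utf-8 instead of the platform default (cp1252 on Windows),
# which is what triggers the UnicodeDecodeError above.
with open(mrpc_file, encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        pass  # process each sentence pair
```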
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/307/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/307/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/306/comments
https://api.github.com/repos/huggingface/datasets/issues/306/events
https://github.com/huggingface/datasets/pull/306
644,176,078
MDExOlB1bGxSZXF1ZXN0NDM4ODQ2MTI3
306
add pg19 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4", "events_url": "https://api.github.com/users/lucidrains/events{/privacy}", "followers_url": "https://api.github.com/users/lucidrains/followers", "following_url": "https://api.github.com/users/lucidrains/following{/other_user}", "gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucidrains", "id": 108653, "login": "lucidrains", "node_id": "MDQ6VXNlcjEwODY1Mw==", "organizations_url": "https://api.github.com/users/lucidrains/orgs", "received_events_url": "https://api.github.com/users/lucidrains/received_events", "repos_url": "https://api.github.com/users/lucidrains/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions", "type": "User", "url": "https://api.github.com/users/lucidrains", "user_view_type": "public" }
[]
closed
false
null
[]
null
12
2020-06-23 22:03:52+00:00
2020-07-06 07:55:59+00:00
2020-07-06 07:55:59+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/306.diff", "html_url": "https://github.com/huggingface/datasets/pull/306", "merged_at": "2020-07-06T07:55:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/306.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/306" }
For https://github.com/huggingface/nlp/issues/274: add a functioning PG19 dataset with dummy data. `cos_e.py` was just auto-linted by `make style`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/306/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/306/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/305/comments
https://api.github.com/repos/huggingface/datasets/issues/305/events
https://github.com/huggingface/datasets/issues/305
644,148,149
MDU6SXNzdWU2NDQxNDgxNDk=
305
Importing downloaded package repository fails
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
null
[]
null
0
2020-06-23 21:09:05+00:00
2020-07-30 16:44:23+00:00
2020-07-30 16:44:23+00:00
MEMBER
null
null
null
null
The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the GitHub repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).

Currently, however, the code seems to have trouble with imports within the package. For example:
```
import nlp
coval = nlp.load_metric('coval')
```
yields:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric
    metric_cls = import_main_class(module_path, dataset=False)
  File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class
    module = importlib.import_module(module_path)
  File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module>
    from .coval_backend.conll import reader  # From: https://github.com/ns-moosavi/coval
  File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module>
    from conll import mention
ModuleNotFoundError: No module named 'conll'
```
Not sure what the fix would be there.
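One workaround worth sketching (purely an assumption, not the library's eventual fix): the downloaded repo uses absolute imports like `from conll import mention`, which only resolve if the package's own directory is on `sys.path`.

```python
import sys

# Illustrative path to the unpacked coval_backend directory inside the
# metrics cache; the <hash> segment is a placeholder.
backend_dir = "/home/yacine/Code/nlp/src/nlp/metrics/coval/<hash>/coval_backend"

# Putting the package's own directory first lets its absolute imports
# ("from conll import mention") resolve against its internal modules.
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)
```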
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/305/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/304/comments
https://api.github.com/repos/huggingface/datasets/issues/304/events
https://github.com/huggingface/datasets/issues/304
644,091,970
MDU6SXNzdWU2NDQwOTE5NzA=
304
Problem while printing doc string when instantiating multiple metrics.
{ "avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4", "events_url": "https://api.github.com/users/codehunk628/events{/privacy}", "followers_url": "https://api.github.com/users/codehunk628/followers", "following_url": "https://api.github.com/users/codehunk628/following{/other_user}", "gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/codehunk628", "id": 51091425, "login": "codehunk628", "node_id": "MDQ6VXNlcjUxMDkxNDI1", "organizations_url": "https://api.github.com/users/codehunk628/orgs", "received_events_url": "https://api.github.com/users/codehunk628/received_events", "repos_url": "https://api.github.com/users/codehunk628/repos", "site_admin": false, "starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions", "type": "User", "url": "https://api.github.com/users/codehunk628", "user_view_type": "public" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
null
[]
null
0
2020-06-23 19:32:05+00:00
2020-07-22 09:50:58+00:00
2020-07-22 09:50:58+00:00
CONTRIBUTOR
null
null
null
null
When I load more than one metric and try to print the doc string of a particular metric, it shows the doc strings of all imported metrics one after the other, which looks quite confusing and clumsy. Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) notebook for problem clarification.
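A minimal reproduction sketch of the report (assuming the docstring is what gets printed; the two metric names are just examples the library ships):

```python
import nlp

bleu = nlp.load_metric("bleu")
rouge = nlp.load_metric("rouge")

# Reported behavior: this shows the BLEU and ROUGE descriptions
# concatenated instead of only the ROUGE one.
print(rouge.__doc__)
```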
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/304/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/304/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/303/comments
https://api.github.com/repos/huggingface/datasets/issues/303/events
https://github.com/huggingface/datasets/pull/303
643,912,464
MDExOlB1bGxSZXF1ZXN0NDM4NjI3Nzcw
303
allow to move files across file systems
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-23 14:56:08+00:00
2020-06-23 15:08:44+00:00
2020-06-23 15:08:43+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/303.diff", "html_url": "https://github.com/huggingface/datasets/pull/303", "merged_at": "2020-06-23T15:08:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/303.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/303" }
Users are allowed to use any `cache_dir` they want, so it can happen that we try to move files across filesystems. We were using `os.rename`, which doesn't allow that, so I changed some of those calls to `shutil.move`. This should fix #301
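For context, a minimal sketch of the difference (paths are illustrative):

```python
import os
import shutil

# os.rename raises OSError (EXDEV, "Invalid cross-device link") when source
# and destination live on different filesystems; shutil.move falls back to
# a copy followed by a delete, so it works in both cases.
src = "/tmp/downloads/dataset.arrow"
dst = "/mnt/other_fs/cache/dataset.arrow"

os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.move(src, dst)  # works even across filesystems
```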
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/303/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/303/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/302/comments
https://api.github.com/repos/huggingface/datasets/issues/302/events
https://github.com/huggingface/datasets/issues/302
643,910,418
MDU6SXNzdWU2NDM5MTA0MTg=
302
Question - Sign Language Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
null
3
2020-06-23 14:53:40+00:00
2020-11-25 11:25:33+00:00
2020-11-25 11:25:33+00:00
CONTRIBUTOR
null
null
null
null
An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because the field is shaping up to be large and the data easily usable. The metrics for sign-language-to-text translation are the same as for text machine translation. So, what do you think about (me, or others) adding datasets here? An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/) For every item in the dataset, the data object includes: 1. video_path - path to mp4 file 2. pose_path - a path to a `.pose` file with human pose landmarks 3. openpose_path - a path to a `.json` file with human pose landmarks 4. gloss - string 5. text - string 6. video_metadata - height, width, frames, framerate ------ To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? For example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse those files itself if libraries exist to do so.
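A hedged sketch of how such records could be declared with the library's feature types (`nlp.Features` and `nlp.Value` are the existing schema primitives; the field names simply mirror the list above, and the schema itself is an assumption, not an agreed design):

```python
import nlp

# Hypothetical schema for a sign-language dataset such as
# RWTH-PHOENIX-Weather 2014 T: media stays on disk as paths,
# only strings and metadata live in the Arrow table.
features = nlp.Features({
    "video_path": nlp.Value("string"),      # path to the mp4 file
    "pose_path": nlp.Value("string"),       # path to the .pose landmark file
    "openpose_path": nlp.Value("string"),   # path to the OpenPose .json file
    "gloss": nlp.Value("string"),
    "text": nlp.Value("string"),
    "video_metadata": {
        "height": nlp.Value("int32"),
        "width": nlp.Value("int32"),
        "frames": nlp.Value("int32"),
        "framerate": nlp.Value("float32"),
    },
})
```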
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/302/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/302/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/301/comments
https://api.github.com/repos/huggingface/datasets/issues/301/events
https://github.com/huggingface/datasets/issues/301
643,763,525
MDU6SXNzdWU2NDM3NjM1MjU=
301
Setting cache_dir gives error on wikipedia download
{ "avatar_url": "https://avatars.githubusercontent.com/u/33862536?v=4", "events_url": "https://api.github.com/users/hallvagi/events{/privacy}", "followers_url": "https://api.github.com/users/hallvagi/followers", "following_url": "https://api.github.com/users/hallvagi/following{/other_user}", "gists_url": "https://api.github.com/users/hallvagi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hallvagi", "id": 33862536, "login": "hallvagi", "node_id": "MDQ6VXNlcjMzODYyNTM2", "organizations_url": "https://api.github.com/users/hallvagi/orgs", "received_events_url": "https://api.github.com/users/hallvagi/received_events", "repos_url": "https://api.github.com/users/hallvagi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hallvagi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hallvagi/subscriptions", "type": "User", "url": "https://api.github.com/users/hallvagi", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-23 11:31:44+00:00
2020-06-24 07:05:07+00:00
2020-06-24 07:05:07+00:00
NONE
null
null
null
null
First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError Traceback (most recent call last) <ipython-input-2-23551344d7bc> in <module> 1 import nlp ----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 386 reader = ArrowReader(self._cache_dir, self.info) --> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True)) 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir) 389 self.info.update(downloaded_info) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir) 231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") 232 downloaded_dataset_info = cached_path(remote_dataset_info) --> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json")) 234 if self._info is not None: 235 self._info.update(self._info.from_directory(cache_dir)) OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/33862536?v=4", "events_url": "https://api.github.com/users/hallvagi/events{/privacy}", "followers_url": "https://api.github.com/users/hallvagi/followers", "following_url": "https://api.github.com/users/hallvagi/following{/other_user}", "gists_url": "https://api.github.com/users/hallvagi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hallvagi", "id": 33862536, "login": "hallvagi", "node_id": "MDQ6VXNlcjMzODYyNTM2", "organizations_url": "https://api.github.com/users/hallvagi/orgs", "received_events_url": "https://api.github.com/users/hallvagi/received_events", "repos_url": "https://api.github.com/users/hallvagi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hallvagi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hallvagi/subscriptions", "type": "User", "url": "https://api.github.com/users/hallvagi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/301/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/301/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/300/comments
https://api.github.com/repos/huggingface/datasets/issues/300/events
https://github.com/huggingface/datasets/pull/300
643,688,304
MDExOlB1bGxSZXF1ZXN0NDM4NDQ4Mjk1
300
Fix bertscore references
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-23 09:38:59+00:00
2020-06-23 14:47:38+00:00
2020-06-23 14:47:37+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/300.diff", "html_url": "https://github.com/huggingface/datasets/pull/300", "merged_at": "2020-06-23T14:47:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/300.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/300" }
I added some type checking for metrics. There was an issue where a metric could interpret a string as a list. A `ValueError` is now raised if a string is given instead of a list. Moreover, I added support for both strings and lists of strings for `references` in `bertscore`, as is the case in the original code. Both ways work: ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, [lg]) score = scorer.compute(lang="en") ``` ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` This should fix #295 and #238
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/300/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/300/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/299/comments
https://api.github.com/repos/huggingface/datasets/issues/299/events
https://github.com/huggingface/datasets/pull/299
643,611,557
MDExOlB1bGxSZXF1ZXN0NDM4Mzg0NDgw
299
remove some print in snli file
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-23 07:46:06+00:00
2020-06-23 08:10:46+00:00
2020-06-23 08:10:44+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/299.diff", "html_url": "https://github.com/huggingface/datasets/pull/299", "merged_at": "2020-06-23T08:10:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/299.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/299" }
This PR removes unwanted `print` statements in some files such as `snli.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/299/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/298/comments
https://api.github.com/repos/huggingface/datasets/issues/298/events
https://github.com/huggingface/datasets/pull/298
643,603,804
MDExOlB1bGxSZXF1ZXN0NDM4Mzc4MDM4
298
Add searchable datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
8
2020-06-23 07:33:03+00:00
2020-06-26 07:50:44+00:00
2020-06-26 07:50:43+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/298.diff", "html_url": "https://github.com/huggingface/datasets/pull/298", "merged_at": "2020-06-26T07:50:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/298.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/298" }
# Better support for Numpy format + Add Indexed Datasets I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib. ## Better support for Numpy format New features: - New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up) using Pandas. - Allow outputting Numpy arrays in batched `.map`, which was the only missing part to fully support Numpy arrays. Pandas offers fast zero-copy Numpy array conversion from Arrow structures. Using it we can speed up the reading of memory-mapped Numpy arrays stored in Arrow format. With these changes you can easily compute embeddings of texts using `.map()`. For example: ```python def embed(text): tokenized_example = tokenizer.encode(text, return_tensors="pt") embeddings = bert_encoder(tokenized_example).numpy() return embeddings dset_with_embeddings = dset.map(lambda example: {"embeddings": embed(example["text"])}) ``` And then reading the embeddings from the arrow format is very fast. PS1: Note that right now only 1d arrays are supported. PS2: It seems possible to do this without pandas but it will require more _trickery_. PS3: I did a simple benchmark with Google Colab that you can view here: https://colab.research.google.com/drive/1QlLTR6LRwYOKGJ-hTHmHyolE3wJzvfFg?usp=sharing ## Add Indexed Datasets For many retrieval tasks it is convenient to index a dataset to be able to run fast queries. For example, for Open Domain QA models like DPR, REALM, and RAG, the retrieval step is very important. Therefore I added two ways to add an index to a column of a dataset: 1) You can index it using a Dense Index like Faiss. It is used to index vectors. Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. 2) You can index it using a Sparse Index like Elasticsearch. It is used to index text and run queries based on BM25 similarity. Example of usage: ```python ds = nlp.load_dataset('crime_and_punish', split='train') ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line'])}) # `embed` outputs a `np.array` ds_with_embeddings.add_vector_index(column='embeddings') scores, retrieved_examples = ds_with_embeddings.get_nearest(column='embeddings', query=embed('my new query'), k=10) ``` ```python ds = nlp.load_dataset('crime_and_punish', split='train') es_client = elasticsearch.Elasticsearch() ds.add_text_index(column='line', es_client=es_client, index_name="my_es_index") scores, retrieved_examples = ds.get_nearest(column='line', query='my new query', k=10) ``` PS4: Faiss allows specifying many options for the [index](https://github.com/facebookresearch/faiss/wiki/The-index-factory) and for [GPU settings](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU). I made sure that the user has full control over those settings. ## Tests I added tests for Faiss, Elasticsearch and indexed datasets. I had to edit the CI config because all the test scripts were not being run by CircleCI. ------------------ I'd be really happy to have some feedback :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/298/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/298/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/297/comments
https://api.github.com/repos/huggingface/datasets/issues/297/events
https://github.com/huggingface/datasets/issues/297
643,444,625
MDU6SXNzdWU2NDM0NDQ2MjU=
297
Error in Demo for Specific Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4", "events_url": "https://api.github.com/users/s-jse/events{/privacy}", "followers_url": "https://api.github.com/users/s-jse/followers", "following_url": "https://api.github.com/users/s-jse/following{/other_user}", "gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/s-jse", "id": 60150701, "login": "s-jse", "node_id": "MDQ6VXNlcjYwMTUwNzAx", "organizations_url": "https://api.github.com/users/s-jse/orgs", "received_events_url": "https://api.github.com/users/s-jse/received_events", "repos_url": "https://api.github.com/users/s-jse/repos", "site_admin": false, "starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-jse/subscriptions", "type": "User", "url": "https://api.github.com/users/s-jse", "user_view_type": "public" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
3
2020-06-23 00:38:42+00:00
2020-07-17 17:43:06+00:00
2020-07-17 17:43:06+00:00
NONE
null
null
null
null
Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following. ![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/297/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/296/comments
https://api.github.com/repos/huggingface/datasets/issues/296/events
https://github.com/huggingface/datasets/issues/296
643,423,717
MDU6SXNzdWU2NDM0MjM3MTc=
296
snli -1 labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2020-06-22 23:33:30+00:00
2020-06-23 14:41:59+00:00
2020-06-23 14:41:58+00:00
CONTRIBUTOR
null
null
null
null
I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels? ``` import nlp from collections import Counter data = nlp.load_dataset('snli')['train'] print(Counter(data['label'])) Counter({0: 183416, 2: 183187, 1: 182764, -1: 785}) ```
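As an aside on working around this: the -1 labels mark SNLI pairs where annotators did not reach a gold consensus, and a common fix is to drop them before training. A minimal sketch, assuming a version of the library that provides `Dataset.filter`:

```python
import nlp

data = nlp.load_dataset('snli')['train']
# -1 means the annotators did not agree on a gold label;
# exclude those pairs from the training set.
train = data.filter(lambda example: example['label'] != -1)
```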
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/296/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/296/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/295/comments
https://api.github.com/repos/huggingface/datasets/issues/295/events
https://github.com/huggingface/datasets/issues/295
643,245,412
MDU6SXNzdWU2NDMyNDU0MTI=
295
Improve input warning for evaluation metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4", "events_url": "https://api.github.com/users/Tiiiger/events{/privacy}", "followers_url": "https://api.github.com/users/Tiiiger/followers", "following_url": "https://api.github.com/users/Tiiiger/following{/other_user}", "gists_url": "https://api.github.com/users/Tiiiger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tiiiger", "id": 19514537, "login": "Tiiiger", "node_id": "MDQ6VXNlcjE5NTE0NTM3", "organizations_url": "https://api.github.com/users/Tiiiger/orgs", "received_events_url": "https://api.github.com/users/Tiiiger/received_events", "repos_url": "https://api.github.com/users/Tiiiger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tiiiger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tiiiger/subscriptions", "type": "User", "url": "https://api.github.com/users/Tiiiger", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-22 17:28:57+00:00
2020-06-23 14:47:37+00:00
2020-06-23 14:47:37+00:00
NONE
null
null
null
null
Hi, I am the author of `bert_score`. Recently, we received [an issue](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format in which `nlp.Metric` takes input. Here is a minimal example: ```python import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here treats the individual characters of `lg` as the references. The correct implementation would be calling ```python scorer.add(lp, [lg]) ``` I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening? Thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/295/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/295/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/294/comments
https://api.github.com/repos/huggingface/datasets/issues/294/events
https://github.com/huggingface/datasets/issues/294
643,181,179
MDU6SXNzdWU2NDMxODExNzk=
294
Cannot load arxiv dataset on MacOS?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JohnGiorgi", "id": 8917831, "login": "JohnGiorgi", "node_id": "MDQ6VXNlcjg5MTc4MzE=", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "type": "User", "url": "https://api.github.com/users/JohnGiorgi", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
4
2020-06-22 15:46:55+00:00
2020-06-30 15:25:10+00:00
2020-06-30 15:25:10+00:00
CONTRIBUTOR
null
null
null
null
I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with: ```python arxiv = nlp.load_dataset("scientific_papers", "arxiv") ``` I get the following stack trace: ```bash JSONDecodeError Traceback (most recent call last) <ipython-input-2-8e00c55d5a59> in <module> ----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv") ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 662 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) 666 writer.write(example) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write)) 1107 -> 1108 for obj in iterable: 1109 yield obj 1110 # Update and possibly print the progressbar. ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path) 114 # "section_names": list[str], list of section names. 115 # "sections": list[list[str]], list of sections (list of paragraphs) --> 116 d = json.loads(line) 117 summary = "\n".join(d["abstract_text"]) 118 # In original paper, <S> and </S> are not used in vocab during training ~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 346 parse_int is None and parse_float is None and 347 parse_constant is None and object_pairs_hook is None and not kw): --> 348 return _default_decoder.decode(s) 349 if cls is None: 350 cls = JSONDecoder ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w) 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx) 351 """ 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982) 163502 examples [02:10, 2710.68 examples/s] ``` I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below: - Platform: Darwin-19.5.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) Any ideas?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JohnGiorgi", "id": 8917831, "login": "JohnGiorgi", "node_id": "MDQ6VXNlcjg5MTc4MzE=", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "type": "User", "url": "https://api.github.com/users/JohnGiorgi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/294/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/293/comments
https://api.github.com/repos/huggingface/datasets/issues/293/events
https://github.com/huggingface/datasets/pull/293
642,942,182
MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4
293
Don't test community datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-22 10:15:33+00:00
2020-06-22 11:07:00+00:00
2020-06-22 11:06:59+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/293.diff", "html_url": "https://github.com/huggingface/datasets/pull/293", "merged_at": "2020-06-22T11:06:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/293" }
This PR disables testing for community datasets on AWS. It should fix the CI that is currently failing.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/293/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/292/comments
https://api.github.com/repos/huggingface/datasets/issues/292/events
https://github.com/huggingface/datasets/pull/292
642,897,797
MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2
292
Update metadata for x_stance dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4", "events_url": "https://api.github.com/users/jvamvas/events{/privacy}", "followers_url": "https://api.github.com/users/jvamvas/followers", "following_url": "https://api.github.com/users/jvamvas/following{/other_user}", "gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jvamvas", "id": 5830820, "login": "jvamvas", "node_id": "MDQ6VXNlcjU4MzA4MjA=", "organizations_url": "https://api.github.com/users/jvamvas/orgs", "received_events_url": "https://api.github.com/users/jvamvas/received_events", "repos_url": "https://api.github.com/users/jvamvas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions", "type": "User", "url": "https://api.github.com/users/jvamvas", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2020-06-22 09:13:26+00:00
2020-06-23 08:07:24+00:00
2020-06-23 08:07:24+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/292.diff", "html_url": "https://github.com/huggingface/datasets/pull/292", "merged_at": "2020-06-23T08:07:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/292.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/292" }
Thank you for featuring the x_stance dataset in your library. This PR updates some metadata: - Citation: Replace preprint with proceedings - URL: Use a URL with long-term availability
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/292/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/292/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/291/comments
https://api.github.com/repos/huggingface/datasets/issues/291/events
https://github.com/huggingface/datasets/pull/291
642,688,450
MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy
291
break statement not required
{ "avatar_url": "https://avatars.githubusercontent.com/u/12967587?v=4", "events_url": "https://api.github.com/users/mayurnewase/events{/privacy}", "followers_url": "https://api.github.com/users/mayurnewase/followers", "following_url": "https://api.github.com/users/mayurnewase/following{/other_user}", "gists_url": "https://api.github.com/users/mayurnewase/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mayurnewase", "id": 12967587, "login": "mayurnewase", "node_id": "MDQ6VXNlcjEyOTY3NTg3", "organizations_url": "https://api.github.com/users/mayurnewase/orgs", "received_events_url": "https://api.github.com/users/mayurnewase/received_events", "repos_url": "https://api.github.com/users/mayurnewase/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mayurnewase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayurnewase/subscriptions", "type": "User", "url": "https://api.github.com/users/mayurnewase", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2020-06-22 01:40:55+00:00
2020-06-23 17:57:58+00:00
2020-06-23 09:37:02+00:00
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/291.diff", "html_url": "https://github.com/huggingface/datasets/pull/291", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/291.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/291" }
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/291/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/290/comments
https://api.github.com/repos/huggingface/datasets/issues/290/events
https://github.com/huggingface/datasets/issues/290
641,978,286
MDU6SXNzdWU2NDE5NzgyODY=
290
ConnectionError - Eli5 dataset download
{ "avatar_url": "https://avatars.githubusercontent.com/u/8490096?v=4", "events_url": "https://api.github.com/users/JovanNj/events{/privacy}", "followers_url": "https://api.github.com/users/JovanNj/followers", "following_url": "https://api.github.com/users/JovanNj/following{/other_user}", "gists_url": "https://api.github.com/users/JovanNj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JovanNj", "id": 8490096, "login": "JovanNj", "node_id": "MDQ6VXNlcjg0OTAwOTY=", "organizations_url": "https://api.github.com/users/JovanNj/orgs", "received_events_url": "https://api.github.com/users/JovanNj/received_events", "repos_url": "https://api.github.com/users/JovanNj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JovanNj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JovanNj/subscriptions", "type": "User", "url": "https://api.github.com/users/JovanNj", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-19 13:40:33+00:00
2020-06-20 13:22:24+00:00
2020-06-20 13:22:24+00:00
NONE
null
null
null
null
Hi, I have a problem with downloading the Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow I would appreciate it if you could help me with this issue.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/290/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/290/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/289/comments
https://api.github.com/repos/huggingface/datasets/issues/289/events
https://github.com/huggingface/datasets/pull/289
641,934,194
MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3
289
update xsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2020-06-19 12:28:32+00:00
2020-06-22 13:27:26+00:00
2020-06-22 07:20:07+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/289.diff", "html_url": "https://github.com/huggingface/datasets/pull/289", "merged_at": "2020-06-22T07:20:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/289.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/289" }
This PR makes the following updates to the xsum dataset: - Manual download is not required anymore - the dataset can be loaded as follows: `nlp.load_dataset('xsum')` **Important** Instead of using an outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json" a more up-to-date url stored here: https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz is used, so that the user does not need to manually download the data anymore. There might be slight breaking changes here for xsum.
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/289/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/289/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/288/comments
https://api.github.com/repos/huggingface/datasets/issues/288/events
https://github.com/huggingface/datasets/issues/288
641,888,610
MDU6SXNzdWU2NDE4ODg2MTA=
288
Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill'
{ "avatar_url": "https://avatars.githubusercontent.com/u/14964542?v=4", "events_url": "https://api.github.com/users/wutong8023/events{/privacy}", "followers_url": "https://api.github.com/users/wutong8023/followers", "following_url": "https://api.github.com/users/wutong8023/following{/other_user}", "gists_url": "https://api.github.com/users/wutong8023/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wutong8023", "id": 14964542, "login": "wutong8023", "node_id": "MDQ6VXNlcjE0OTY0NTQy", "organizations_url": "https://api.github.com/users/wutong8023/orgs", "received_events_url": "https://api.github.com/users/wutong8023/received_events", "repos_url": "https://api.github.com/users/wutong8023/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wutong8023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wutong8023/subscriptions", "type": "User", "url": "https://api.github.com/users/wutong8023", "user_view_type": "public" }
[]
closed
false
null
[]
null
5
2020-06-19 11:01:22+00:00
2020-06-21 09:05:11+00:00
2020-06-21 09:05:11+00:00
NONE
null
null
null
null
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. 
from ._conv import register_converters as _register_converters Traceback (most recent call last): File "/Users/parasol_tree/Resource/019 - Github/AcademicEnglishToolkit /test.py", line 7, in <module> import nlp File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/__init__.py", line 27, in <module> from .arrow_dataset import Dataset File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/arrow_dataset.py", line 31, in <module> from nlp.utils.py_utils import dumps File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/download_manager.py", line 25, in <module> from .py_utils import flatten_nested, map_nested, size_str File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 244, in <module> class Pickler(dill.Pickler): File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 247, in Pickler dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy()) AttributeError: module 'dill' has no attribute '_dill'
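A plausible diagnosis, not confirmed in the report itself: the installed `dill` release predates the `_dill` submodule that nlp's `Pickler` relies on. A minimal sanity-check sketch under that assumption:

```python
# Sketch: verify that dill exposes the `_dill` submodule nlp expects.
import dill

print(dill.__version__)
if not hasattr(dill, "_dill"):
    # Upgrading usually restores the attribute: pip install -U dill
    raise RuntimeError("dill is too old for nlp; try `pip install -U dill`")
```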
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/288/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/288/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/287/comments
https://api.github.com/repos/huggingface/datasets/issues/287/events
https://github.com/huggingface/datasets/pull/287
641,800,227
MDExOlB1bGxSZXF1ZXN0NDM2OTY0NTg0
287
fix squad_v2 metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-19 08:24:46+00:00
2020-06-19 08:33:43+00:00
2020-06-19 08:33:41+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/287.diff", "html_url": "https://github.com/huggingface/datasets/pull/287", "merged_at": "2020-06-19T08:33:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/287.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/287" }
Fix #280. The imports were wrong.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/287/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/287/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/286/comments
https://api.github.com/repos/huggingface/datasets/issues/286/events
https://github.com/huggingface/datasets/pull/286
641,585,758
MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4
286
Add ANLI dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4", "events_url": "https://api.github.com/users/easonnie/events{/privacy}", "followers_url": "https://api.github.com/users/easonnie/followers", "following_url": "https://api.github.com/users/easonnie/following{/other_user}", "gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/easonnie", "id": 11016329, "login": "easonnie", "node_id": "MDQ6VXNlcjExMDE2MzI5", "organizations_url": "https://api.github.com/users/easonnie/orgs", "received_events_url": "https://api.github.com/users/easonnie/received_events", "repos_url": "https://api.github.com/users/easonnie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/easonnie/subscriptions", "type": "User", "url": "https://api.github.com/users/easonnie", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-18 22:27:30+00:00
2020-06-22 12:23:27+00:00
2020-06-22 12:23:27+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/286.diff", "html_url": "https://github.com/huggingface/datasets/pull/286", "merged_at": "2020-06-22T12:23:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/286.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/286" }
I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and pushed the code for ANLI. Please let me know if there are any errors.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/286/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/286/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/285/comments
https://api.github.com/repos/huggingface/datasets/issues/285/events
https://github.com/huggingface/datasets/pull/285
641,360,702
MDExOlB1bGxSZXF1ZXN0NDM2NjAyMjk4
285
Consistent formatting of citations
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-18 16:25:23+00:00
2020-06-22 08:09:25+00:00
2020-06-22 08:09:24+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/285.diff", "html_url": "https://github.com/huggingface/datasets/pull/285", "merged_at": "2020-06-22T08:09:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/285.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/285" }
#283
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/285/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/285/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/284/comments
https://api.github.com/repos/huggingface/datasets/issues/284/events
https://github.com/huggingface/datasets/pull/284
641,337,217
MDExOlB1bGxSZXF1ZXN0NDM2NTgxODQ2
284
Fix manual download instructions
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[]
closed
false
null
[]
null
5
2020-06-18 15:59:57+00:00
2020-06-19 08:24:21+00:00
2020-06-19 08:24:19+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/284.diff", "html_url": "https://github.com/huggingface/datasets/pull/284", "merged_at": "2020-06-19T08:24:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/284.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/284" }
This PR replaces the static `DatasetBuilder` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` with a property function `manual_download_instructions()`. Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs. After some brainstorming with @mariamabarham and @lhoestq, we came to the conclusion that having a property function `manual_download_instructions()` gives us more flexibility to decide on a per-config basis in the dataset builder whether manual download instructions are needed. Also, this PR unblocks @sshleifer by solving a bug with `wmt16 - ro-en`. From this branch you should be able to successfully run ```python import nlp ds = nlp.load_dataset('./datasets/wmt16', 'ro-en') ``` and once this PR is merged S3 should be synced so that ```python import nlp ds = nlp.load_dataset("wmt16", "ro-en") ``` works as well. **Important**: Since `MANUAL_DOWNLOAD_INSTRUCTIONS` was not really exposed to the user, this PR should not be a problem regarding backward compatibility.
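A minimal sketch of the per-config pattern this enables; the builder class and config name below are hypothetical, and whether the merged code exposes this as a plain method or a `@property` is an assumption:

```python
import nlp

class MyDataset(nlp.GeneratorBasedBuilder):  # hypothetical builder
    @property
    def manual_download_instructions(self):
        # Only configs that truly need local files ask the user for a data_dir.
        if self.config.name == "needs_local_archive":  # hypothetical config
            return "Please download the archive manually and pass data_dir=<path>."
        return None  # every other config downloads automatically
```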
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/284/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/284/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/283/comments
https://api.github.com/repos/huggingface/datasets/issues/283/events
https://github.com/huggingface/datasets/issues/283
641,270,439
MDU6SXNzdWU2NDEyNzA0Mzk=
283
Consistent formatting of citations
{ "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "events_url": "https://api.github.com/users/srush/events{/privacy}", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/srush", "id": 35882, "login": "srush", "node_id": "MDQ6VXNlcjM1ODgy", "organizations_url": "https://api.github.com/users/srush/orgs", "received_events_url": "https://api.github.com/users/srush/received_events", "repos_url": "https://api.github.com/users/srush/repos", "site_admin": false, "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "type": "User", "url": "https://api.github.com/users/srush", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" } ]
null
0
2020-06-18 14:48:45+00:00
2020-06-22 17:30:46+00:00
2020-06-22 17:30:46+00:00
CONTRIBUTOR
null
null
null
null
The citations are all in different formats: some have "```" with free text inside, others are proper BibTeX. Can we make it so that they are all proper citations, i.e. parseable per the BibTeX spec: https://bibtexparser.readthedocs.io/en/master/
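For reference, a minimal sketch of validating one citation string with the bibtexparser library linked above (v1 API assumed; the entry itself is made up):

```python
import bibtexparser

citation = """@inproceedings{doe2020example,
  title  = {An Example Title},
  author = {Doe, Jane},
  year   = {2020},
}"""

db = bibtexparser.loads(citation)
assert len(db.entries) == 1, "citation did not parse as valid BibTeX"
print(db.entries[0]["title"])  # -> An Example Title
```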
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/283/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/283/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/282/comments
https://api.github.com/repos/huggingface/datasets/issues/282/events
https://github.com/huggingface/datasets/pull/282
641,217,759
MDExOlB1bGxSZXF1ZXN0NDM2NDgxNzMy
282
Update dataset_info from gcs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-18 13:41:15+00:00
2020-06-18 16:24:52+00:00
2020-06-18 16:24:51+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/282.diff", "html_url": "https://github.com/huggingface/datasets/pull/282", "merged_at": "2020-06-18T16:24:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/282.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/282" }
Some datasets are hosted on GCS (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from GCS and not from the info extracted from the local `dataset_infos.json` (the one that contains the info for each config). Indeed, local files may end up outdated. Furthermore, to avoid an outdated dataset_infos.json, I now make sure that each time you run `load_dataset` it also tries to update the file locally.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/282/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/282/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/281/comments
https://api.github.com/repos/huggingface/datasets/issues/281/events
https://github.com/huggingface/datasets/issues/281
641,067,856
MDU6SXNzdWU2NDEwNjc4NTY=
281
Private/sensitive data
{ "avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4", "events_url": "https://api.github.com/users/MFreidank/events{/privacy}", "followers_url": "https://api.github.com/users/MFreidank/followers", "following_url": "https://api.github.com/users/MFreidank/following{/other_user}", "gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MFreidank", "id": 6368040, "login": "MFreidank", "node_id": "MDQ6VXNlcjYzNjgwNDA=", "organizations_url": "https://api.github.com/users/MFreidank/orgs", "received_events_url": "https://api.github.com/users/MFreidank/received_events", "repos_url": "https://api.github.com/users/MFreidank/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions", "type": "User", "url": "https://api.github.com/users/MFreidank", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2020-06-18 09:47:27+00:00
2020-06-20 13:15:12+00:00
2020-06-20 13:15:12+00:00
CONTRIBUTOR
null
null
null
null
Hi all, Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch. Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information. Is there support/a plan to support such data with NLP, e.g. by reading it from local sources? Use case flow could look like this: use NLP to prototype an approach on similar, public data and apply the resulting prototype on sensitive/private data without the need to rethink data processing pipelines. Many thanks for your responses ahead of time and kind regards, MFreidank
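A minimal sketch of the local-files route, assuming the generic csv loader is available in your nlp version; the file names are hypothetical:

```python
import nlp

# Local files never leave the machine, so sensitive data stays private.
private = nlp.load_dataset(
    "csv",
    data_files={"train": "private_train.csv", "validation": "private_valid.csv"},
)
print(private["train"][0])
```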
{ "avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4", "events_url": "https://api.github.com/users/MFreidank/events{/privacy}", "followers_url": "https://api.github.com/users/MFreidank/followers", "following_url": "https://api.github.com/users/MFreidank/following{/other_user}", "gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MFreidank", "id": 6368040, "login": "MFreidank", "node_id": "MDQ6VXNlcjYzNjgwNDA=", "organizations_url": "https://api.github.com/users/MFreidank/orgs", "received_events_url": "https://api.github.com/users/MFreidank/received_events", "repos_url": "https://api.github.com/users/MFreidank/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions", "type": "User", "url": "https://api.github.com/users/MFreidank", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/281/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/281/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/280/comments
https://api.github.com/repos/huggingface/datasets/issues/280/events
https://github.com/huggingface/datasets/issues/280
640,677,615
MDU6SXNzdWU2NDA2Nzc2MTU=
280
Error with SquadV2 Metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4", "events_url": "https://api.github.com/users/avinregmi/events{/privacy}", "followers_url": "https://api.github.com/users/avinregmi/followers", "following_url": "https://api.github.com/users/avinregmi/following{/other_user}", "gists_url": "https://api.github.com/users/avinregmi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avinregmi", "id": 32203792, "login": "avinregmi", "node_id": "MDQ6VXNlcjMyMjAzNzky", "organizations_url": "https://api.github.com/users/avinregmi/orgs", "received_events_url": "https://api.github.com/users/avinregmi/received_events", "repos_url": "https://api.github.com/users/avinregmi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avinregmi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinregmi/subscriptions", "type": "User", "url": "https://api.github.com/users/avinregmi", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-17 19:10:54+00:00
2020-06-19 08:33:41+00:00
2020-06-19 08:33:41+00:00
NONE
null
null
null
null
I can't seem to import squad v2 metrics. **squad_metric = nlp.load_metric('squad_v2')** **This throws an error:** ``` ImportError Traceback (most recent call last) <ipython-input-8-170b6a170555> in <module> ----> 1 squad_metric = nlp.load_metric('squad_v2') ~/env/lib64/python3.6/site-packages/nlp/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs) 426 """ 427 module_path = prepare_module(path, download_config=download_config, dataset=False) --> 428 metric_cls = import_main_class(module_path, dataset=False) 429 metric = metric_cls( 430 name=name, ~/env/lib64/python3.6/site-packages/nlp/load.py in import_main_class(module_path, dataset) 55 """ 56 importlib.invalidate_caches() ---> 57 module = importlib.import_module(module_path) 58 59 if dataset: /usr/lib64/python3.6/importlib/__init__.py in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 /usr/lib64/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib64/python3.6/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib64/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/env/lib64/python3.6/site-packages/nlp/metrics/squad_v2/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a/squad_v2.py in <module> 16 17 import nlp ---> 18 from .evaluate import evaluate 19 20 _CITATION = """\ ImportError: cannot import name 'evaluate' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/280/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/279/comments
https://api.github.com/repos/huggingface/datasets/issues/279/events
https://github.com/huggingface/datasets/issues/279
640,611,692
MDU6SXNzdWU2NDA2MTE2OTI=
279
Dataset Preprocessing Cache with .map() function not working as expected
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie", "user_view_type": "public" }
[]
closed
false
null
[]
null
5
2020-06-17 17:17:21+00:00
2021-07-06 21:43:28+00:00
2021-04-18 23:43:49+00:00
NONE
null
null
null
null
I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system. Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be able to be certain the data is being re-processed rather than loaded from a cached file. Could you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess. I was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all exited this process having been converted to the test set. Thanks!
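A minimal sketch of forcing re-processing instead of reading a cached result, assuming a version of `nlp` where `.map()` accepts `load_from_cache_file` (the mapped function here is just illustrative):

```python
import nlp

dataset = nlp.load_dataset("glue", "mrpc", split="train")

# Bypass the processing cache so the function is guaranteed to re-run.
processed = dataset.map(
    lambda ex: {"len_s1": len(ex["sentence1"])},
    load_from_cache_file=False,
)
```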
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/279/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/279/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/278/comments
https://api.github.com/repos/huggingface/datasets/issues/278/events
https://github.com/huggingface/datasets/issues/278
640,518,917
MDU6SXNzdWU2NDA1MTg5MTc=
278
MemoryError when loading German Wikipedia
{ "avatar_url": "https://avatars.githubusercontent.com/u/4698028?v=4", "events_url": "https://api.github.com/users/gregburman/events{/privacy}", "followers_url": "https://api.github.com/users/gregburman/followers", "following_url": "https://api.github.com/users/gregburman/following{/other_user}", "gists_url": "https://api.github.com/users/gregburman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gregburman", "id": 4698028, "login": "gregburman", "node_id": "MDQ6VXNlcjQ2OTgwMjg=", "organizations_url": "https://api.github.com/users/gregburman/orgs", "received_events_url": "https://api.github.com/users/gregburman/received_events", "repos_url": "https://api.github.com/users/gregburman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gregburman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gregburman/subscriptions", "type": "User", "url": "https://api.github.com/users/gregburman", "user_view_type": "public" }
[]
closed
false
null
[]
null
7
2020-06-17 15:06:21+00:00
2020-06-19 12:53:02+00:00
2020-06-19 12:53:02+00:00
NONE
null
null
null
null
Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :) I'm trying to download the German Wikipedia dataset as follows: ``` wiki = nlp.load_dataset("wikipedia", "20200501.de", split="train") ``` However, when I do so, I get the following error: ``` Downloading and preparing dataset wikipedia/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/ubuntu/.cache/huggingface/datasets/wikipedia/20200501.de/1.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset save_infos=save_infos, File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 433, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 824, in _download_and_prepare "\n\t`{}`".format(usage_example) nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')` ``` So, following on from the example usage at the bottom, I tried specifying `beam_runner='DirectRunner'`; however when I do this, after about 20 min once the data has all downloaded, I get a `MemoryError` as warned. This isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seems to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset? My nlp version is 0.2.1. Thank you!
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/278/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/278/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/277/comments
https://api.github.com/repos/huggingface/datasets/issues/277/events
https://github.com/huggingface/datasets/issues/277
640,163,053
MDU6SXNzdWU2NDAxNjMwNTM=
277
Empty samples in glue/qqp
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-17 05:54:52+00:00
2020-06-21 00:21:45+00:00
2020-06-21 00:21:45+00:00
CONTRIBUTOR
null
null
null
null
``` qqp = nlp.load_dataset('glue', 'qqp') print(qqp['train'][310121]) print(qqp['train'][362225]) ``` ``` {'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137} {'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246} ``` Notice that `question2` is an empty string. BTW, I have checked and these two are the only naughty ones in all splits of qqp.
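A minimal cleanup sketch, assuming a version of `nlp` that ships `Dataset.filter`:

```python
import nlp

qqp = nlp.load_dataset("glue", "qqp")

# Drop the rows whose question2 is an empty string before training.
clean_train = qqp["train"].filter(lambda ex: ex["question2"] != "")
print(len(qqp["train"]) - len(clean_train))  # expect 2, per the report above
```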
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/277/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/277/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/276/comments
https://api.github.com/repos/huggingface/datasets/issues/276/events
https://github.com/huggingface/datasets/pull/276
639,490,858
MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5
276
Fix metric compute (original_instructions missing)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-16 08:52:01+00:00
2020-06-18 07:41:45+00:00
2020-06-18 07:41:44+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/276.diff", "html_url": "https://github.com/huggingface/datasets/pull/276", "merged_at": "2020-06-18T07:41:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/276.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/276" }
When loading arrow data, we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset. However, metrics load data the same way but don't need instructions (we use a single file). In this PR I just make `original_instructions` optional when reading files to load a `Dataset` object. This should fix #269
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/276/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/275/comments
https://api.github.com/repos/huggingface/datasets/issues/275/events
https://github.com/huggingface/datasets/issues/275
639,439,052
MDU6SXNzdWU2Mzk0MzkwNTI=
275
NonMatchingChecksumError when loading pubmed dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4", "events_url": "https://api.github.com/users/DavideStenner/events{/privacy}", "followers_url": "https://api.github.com/users/DavideStenner/followers", "following_url": "https://api.github.com/users/DavideStenner/following{/other_user}", "gists_url": "https://api.github.com/users/DavideStenner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DavideStenner", "id": 48441753, "login": "DavideStenner", "node_id": "MDQ6VXNlcjQ4NDQxNzUz", "organizations_url": "https://api.github.com/users/DavideStenner/orgs", "received_events_url": "https://api.github.com/users/DavideStenner/received_events", "repos_url": "https://api.github.com/users/DavideStenner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DavideStenner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavideStenner/subscriptions", "type": "User", "url": "https://api.github.com/users/DavideStenner", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
1
2020-06-16 07:31:51+00:00
2020-06-19 07:37:07+00:00
2020-06-19 07:37:07+00:00
NONE
null
null
null
null
I get this error when I run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`. The error is: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-7742dea167d0> in <module>() ----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]') 2 df = pd.DataFrame(df) 3 gc.collect() 3 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 431 verify_infos = not save_infos and not ignore_verifications 432 self._download_and_prepare( --> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 434 ) 435 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 468 # Checksums verification 469 if verify_infos: --> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 471 for split_generator in split_generators: 472 if str(split_generator.split_info.name).lower() == "all": /usr/local/lib/python3.6/dist-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download'] ``` I'm currently working on Google Colab. That is quite strange because yesterday it was fine.
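A stopgap sketch grounded in the `load_dataset` signature visible in the traceback above, which accepts `ignore_verifications` to skip the checksum check (acceptable only if you trust the download mirror):

```python
import nlp

# Workaround while the recorded checksums are out of date.
df = nlp.load_dataset(
    "scientific_papers", "pubmed",
    split="train[:50%]",
    ignore_verifications=True,  # skips the NonMatchingChecksumError
)
```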
{ "avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4", "events_url": "https://api.github.com/users/DavideStenner/events{/privacy}", "followers_url": "https://api.github.com/users/DavideStenner/followers", "following_url": "https://api.github.com/users/DavideStenner/following{/other_user}", "gists_url": "https://api.github.com/users/DavideStenner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DavideStenner", "id": 48441753, "login": "DavideStenner", "node_id": "MDQ6VXNlcjQ4NDQxNzUz", "organizations_url": "https://api.github.com/users/DavideStenner/orgs", "received_events_url": "https://api.github.com/users/DavideStenner/received_events", "repos_url": "https://api.github.com/users/DavideStenner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DavideStenner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavideStenner/subscriptions", "type": "User", "url": "https://api.github.com/users/DavideStenner", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/275/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/274/comments
https://api.github.com/repos/huggingface/datasets/issues/274/events
https://github.com/huggingface/datasets/issues/274
639,156,625
MDU6SXNzdWU2MzkxNTY2MjU=
274
PG-19
{ "avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4", "events_url": "https://api.github.com/users/lucidrains/events{/privacy}", "followers_url": "https://api.github.com/users/lucidrains/followers", "following_url": "https://api.github.com/users/lucidrains/following{/other_user}", "gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucidrains", "id": 108653, "login": "lucidrains", "node_id": "MDQ6VXNlcjEwODY1Mw==", "organizations_url": "https://api.github.com/users/lucidrains/orgs", "received_events_url": "https://api.github.com/users/lucidrains/received_events", "repos_url": "https://api.github.com/users/lucidrains/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions", "type": "User", "url": "https://api.github.com/users/lucidrains", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
4
2020-06-15 21:02:26+00:00
2020-07-06 15:35:02+00:00
2020-07-06 15:35:02+00:00
CONTRIBUTOR
null
null
null
null
Hi, and thanks for all your open-source work, as always! I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling.
{ "avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4", "events_url": "https://api.github.com/users/lucidrains/events{/privacy}", "followers_url": "https://api.github.com/users/lucidrains/followers", "following_url": "https://api.github.com/users/lucidrains/following{/other_user}", "gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucidrains", "id": 108653, "login": "lucidrains", "node_id": "MDQ6VXNlcjEwODY1Mw==", "organizations_url": "https://api.github.com/users/lucidrains/orgs", "received_events_url": "https://api.github.com/users/lucidrains/received_events", "repos_url": "https://api.github.com/users/lucidrains/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions", "type": "User", "url": "https://api.github.com/users/lucidrains", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/274/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/273/comments
https://api.github.com/repos/huggingface/datasets/issues/273/events
https://github.com/huggingface/datasets/pull/273
638,968,054
MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4
273
update cos_e to add cos_e v1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-15 16:03:22+00:00
2020-06-16 08:25:54+00:00
2020-06-16 08:25:52+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/273.diff", "html_url": "https://github.com/huggingface/datasets/pull/273", "merged_at": "2020-06-16T08:25:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/273.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/273" }
This PR updates the cos_e dataset to add v1.0, as requested in #163. @nazneenrajani
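If the two releases are exposed as separate configurations (an assumption about how this PR lays them out), loading would look roughly like this sketch:

```python
import nlp

# Config names 'v1.0' and 'v1.11' are assumptions about how this PR
# exposes the two releases of CoS-E.
cos_e_v1 = nlp.load_dataset('cos_e', 'v1.0')
cos_e_v111 = nlp.load_dataset('cos_e', 'v1.11')
```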
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/273/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/273/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/272/comments
https://api.github.com/repos/huggingface/datasets/issues/272/events
https://github.com/huggingface/datasets/pull/272
638,307,313
MDExOlB1bGxSZXF1ZXN0NDM0MTExOTQ3
272
asd
{ "avatar_url": "https://avatars.githubusercontent.com/u/66900970?v=4", "events_url": "https://api.github.com/users/sn696/events{/privacy}", "followers_url": "https://api.github.com/users/sn696/followers", "following_url": "https://api.github.com/users/sn696/following{/other_user}", "gists_url": "https://api.github.com/users/sn696/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sn696", "id": 66900970, "login": "sn696", "node_id": "MDQ6VXNlcjY2OTAwOTcw", "organizations_url": "https://api.github.com/users/sn696/orgs", "received_events_url": "https://api.github.com/users/sn696/received_events", "repos_url": "https://api.github.com/users/sn696/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sn696/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sn696/subscriptions", "type": "User", "url": "https://api.github.com/users/sn696", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-14 08:20:38+00:00
2020-06-14 09:16:41+00:00
2020-06-14 09:16:41+00:00
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/272.diff", "html_url": "https://github.com/huggingface/datasets/pull/272", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/272.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/272" }
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/272/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/272/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/271/comments
https://api.github.com/repos/huggingface/datasets/issues/271/events
https://github.com/huggingface/datasets/pull/271
638,135,754
MDExOlB1bGxSZXF1ZXN0NDMzOTg3NDkw
271
Fix allociné dataset configuration
{ "avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4", "events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}", "followers_url": "https://api.github.com/users/TheophileBlard/followers", "following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}", "gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TheophileBlard", "id": 37028092, "login": "TheophileBlard", "node_id": "MDQ6VXNlcjM3MDI4MDky", "organizations_url": "https://api.github.com/users/TheophileBlard/orgs", "received_events_url": "https://api.github.com/users/TheophileBlard/received_events", "repos_url": "https://api.github.com/users/TheophileBlard/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions", "type": "User", "url": "https://api.github.com/users/TheophileBlard", "user_view_type": "public" }
[]
closed
false
null
[]
null
6
2020-06-13 10:12:10+00:00
2020-06-18 07:41:21+00:00
2020-06-18 07:41:20+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/271.diff", "html_url": "https://github.com/huggingface/datasets/pull/271", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/271.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/271" }
This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with: ```python dataset = load_dataset('allocine', 'allocine') ``` This is redundant, as there is only one "dataset configuration", and should only be: ```python dataset = load_dataset('allocine') ``` This is my mistake, because the code for [`allocine.py`](https://github.com/huggingface/nlp/blob/master/datasets/allocine/allocine.py) was inspired by [`imdb.py`](https://github.com/huggingface/nlp/blob/master/datasets/imdb/imdb.py), which also forces the user to specify the "dataset configuration" (even if there is only one). I believe this PR should solve this issue, making the Allociné dataset more convenient to use.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/271/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/270/comments
https://api.github.com/repos/huggingface/datasets/issues/270/events
https://github.com/huggingface/datasets/issues/270
638,121,617
MDU6SXNzdWU2MzgxMjE2MTc=
270
c4 dataset is not viewable in nlpviewer demo
{ "avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4", "events_url": "https://api.github.com/users/rajarsheem/events{/privacy}", "followers_url": "https://api.github.com/users/rajarsheem/followers", "following_url": "https://api.github.com/users/rajarsheem/following{/other_user}", "gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rajarsheem", "id": 6441313, "login": "rajarsheem", "node_id": "MDQ6VXNlcjY0NDEzMTM=", "organizations_url": "https://api.github.com/users/rajarsheem/orgs", "received_events_url": "https://api.github.com/users/rajarsheem/received_events", "repos_url": "https://api.github.com/users/rajarsheem/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions", "type": "User", "url": "https://api.github.com/users/rajarsheem", "user_view_type": "public" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
1
2020-06-13 08:26:16+00:00
2020-10-27 15:35:29+00:00
2020-10-27 15:35:13+00:00
NONE
null
null
null
null
I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/) ```python ModuleNotFoundError: No module named 'langdetect' Traceback: File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp_viewer/run.py", line 54, in <module> configs = get_confs(option.id) File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs builder_cls = nlp.load.import_main_class(module_path, dataset=True) File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module> from .c4_utils import ( File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module> import langdetect ```
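The traceback points at a missing optional dependency rather than the dataset script itself. A quick preflight check, a sketch only; the c4 script may pull in further optional packages beyond `langdetect`, so this only addresses the first failure:

```python
import importlib.util
import sys

# The c4 script imports `langdetect` at module load time; check for it
# before the viewer (or a local script) imports the dataset module.
# Further optional dependencies may surface after this one is installed.
if importlib.util.find_spec("langdetect") is None:
    sys.exit("missing dependency: run `pip install langdetect` first")
```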
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/270/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/269/comments
https://api.github.com/repos/huggingface/datasets/issues/269/events
https://github.com/huggingface/datasets/issues/269
638,106,774
MDU6SXNzdWU2MzgxMDY3NzQ=
269
Error in metric.compute: missing `original_instructions` argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zphang", "id": 1668462, "login": "zphang", "node_id": "MDQ6VXNlcjE2Njg0NjI=", "organizations_url": "https://api.github.com/users/zphang/orgs", "received_events_url": "https://api.github.com/users/zphang/received_events", "repos_url": "https://api.github.com/users/zphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "type": "User", "url": "https://api.github.com/users/zphang", "user_view_type": "public" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
0
2020-06-13 06:26:54+00:00
2020-06-18 07:41:44+00:00
2020-06-18 07:41:44+00:00
NONE
null
null
null
null
I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example: ```python import nlp rte_metric = nlp.load_metric('glue', name="rte") rte_metric.compute( [0, 0, 1, 1], [0, 1, 0, 1], ) ``` ``` 181 # Read the predictions and references 182 reader = ArrowReader(path=self.data_dir, info=None) --> 183 self.data = reader.read_files(node_files) 184 185 # Release all of our locks TypeError: read_files() missing 1 required positional argument: 'original_instructions' ``` I believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. Elsewhere, an empty-string default is provided--perhaps that could be done here too?
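A toy illustration of the failure mode and of the empty-string-default fix suggested above; this is not the library's actual code:

```python
# Toy reproduction: a newly required positional argument breaks old call
# sites, while an empty-string default (as suggested above) keeps them working.
def read_files_new(files, original_instructions):
    return files, original_instructions

def read_files_fixed(files, original_instructions=""):
    return files, original_instructions

try:
    read_files_new(["rte.arrow"])          # TypeError, as in the traceback
except TypeError as e:
    print(e)

print(read_files_fixed(["rte.arrow"]))     # old call sites keep working
```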
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/269/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/269/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/268/comments
https://api.github.com/repos/huggingface/datasets/issues/268/events
https://github.com/huggingface/datasets/pull/268
637,848,056
MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1
268
add Rotten Tomatoes Movie Review sentences sentiment dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-12 15:53:59+00:00
2020-06-18 07:46:24+00:00
2020-06-18 07:46:23+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/268.diff", "html_url": "https://github.com/huggingface/datasets/pull/268", "merged_at": "2020-06-18T07:46:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/268" }
Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/268/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/267/comments
https://api.github.com/repos/huggingface/datasets/issues/267/events
https://github.com/huggingface/datasets/issues/267
637,415,545
MDU6SXNzdWU2Mzc0MTU1NDU=
267
How can I load/find WMT en-romanian?
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" } ]
null
1
2020-06-12 01:09:37+00:00
2020-06-19 08:24:19+00:00
2020-06-19 08:24:19+00:00
CONTRIBUTOR
null
null
null
null
I believe it is from `wmt16`. When I run ```python wmt = nlp.load_dataset('wmt16') ``` I get: ```python AssertionError: The dataset wmt16 with config cs-en requires manual data. Please follow the manual download instructions: Some of the wmt configs here, require a manual download. Please look into wmt.py to see the exact path (and file name) that has to be downloaded. . Manual data can be loaded with `nlp.load(wmt16, data_dir='<path/to/manual/data>') ``` There is no wmt.py, as the error message suggests, and wmt16.py doesn't have manual download instructions. Any idea how to do this? Thanks in advance!
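A guess at what sidesteps this: each language pair is a separate configuration, and only some pairs need manual data, so requesting the Romanian pair explicitly may avoid the default `cs-en` config. The config name is an assumption based on wmt16.py:

```python
import nlp

# 'ro-en' is an assumed config name taken from wmt16.py; only some wmt16
# configs require manually downloaded corpora, so an explicit pair may work.
wmt = nlp.load_dataset('wmt16', 'ro-en')
```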
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/267/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/267/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/266/comments
https://api.github.com/repos/huggingface/datasets/issues/266/events
https://github.com/huggingface/datasets/pull/266
637,156,392
MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw
266
Add sort, shuffle, test_train_split and select methods
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2020-06-11 16:22:20+00:00
2020-06-18 16:23:25+00:00
2020-06-18 16:23:24+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/266.diff", "html_url": "https://github.com/huggingface/datasets/pull/266", "merged_at": "2020-06-18T16:23:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/266" }
Add a bunch of methods to reorder/split/select rows in a dataset: - `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...) - `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy-compatible type) - `dataset.shuffle(seed)`: shuffle a dataset's rows - `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits) All these methods are **not** in-place, which means they return a new ``Dataset``. This is the default behavior in the library. Fix #147 #166 #259
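A short usage sketch of the four methods described above; the dataset name is just a placeholder:

```python
import nlp

dataset = nlp.load_dataset('glue', 'mrpc', split='train')  # placeholder dataset

# None of these mutate `dataset`; each call returns a new Dataset.
subset = dataset.select([0, 1, 2, 2])              # duplicate indices are allowed
ordered = dataset.sort('label')                    # numpy-compatible column
shuffled = dataset.shuffle(seed=42)
splits = dataset.train_test_split(test_size=0.1)
train, test = splits['train'], splits['test']
```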
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/266/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/265/comments
https://api.github.com/repos/huggingface/datasets/issues/265/events
https://github.com/huggingface/datasets/pull/265
637,139,220
MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz
265
Add pyarrow warning colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-11 15:57:51+00:00
2020-08-02 18:14:36+00:00
2020-06-12 08:14:16+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/265.diff", "html_url": "https://github.com/huggingface/datasets/pull/265", "merged_at": "2020-06-12T08:14:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/265" }
When a user installs `nlp` on Google Colab, Colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow. This is an issue because `nlp` requires the updated version to work correctly. In this PR I added an error that is shown to the user on Google Colab if the user tries to `import nlp` without having restarted the runtime. The error tells the user to restart the runtime.
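A rough sketch of the kind of import-time guard described here; this is not the PR's exact code, and the version floor is an assumption:

```python
import pyarrow

MIN_PYARROW = (0, 16)  # assumed floor; the real requirement lives in setup.py

found = tuple(int(p) for p in pyarrow.__version__.split('.')[:2])
if found < MIN_PYARROW:
    raise ImportError(
        f"nlp needs pyarrow>={'.'.join(map(str, MIN_PYARROW))} but found "
        f"{pyarrow.__version__}; on Google Colab, restart the runtime after "
        f"`pip install nlp` so the upgraded pyarrow is actually imported."
    )
```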
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/265/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/265/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/264/comments
https://api.github.com/repos/huggingface/datasets/issues/264/events
https://github.com/huggingface/datasets/pull/264
637,106,170
MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4
264
Fix small issues creating dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-11 15:20:16+00:00
2020-06-12 08:15:57+00:00
2020-06-12 08:15:56+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/264.diff", "html_url": "https://github.com/huggingface/datasets/pull/264", "merged_at": "2020-06-12T08:15:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/264" }
Fix many small issues mentioned in #249: - don't force installing apache beam for the CLI commands - fix None cache dir when using `dl_manager.download_custom` - add a new `dev` extra in `setup.py` that contains test and quality dependencies - mock dataset sizes when running tests with dummy data - add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md This should help users create their datasets. Next step is the `add_dataset.md` docs :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/264/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/264/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/263/comments
https://api.github.com/repos/huggingface/datasets/issues/263/events
https://github.com/huggingface/datasets/issues/263
637,028,015
MDU6SXNzdWU2MzcwMjgwMTU=
263
[Feature request] Support for external modality for language datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aleSuglia", "id": 1479733, "login": "aleSuglia", "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "repos_url": "https://api.github.com/users/aleSuglia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "type": "User", "url": "https://api.github.com/users/aleSuglia", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
null
5
2020-06-11 13:42:18+00:00
2022-02-10 13:26:35+00:00
2022-02-10 13:26:35+00:00
CONTRIBUTOR
null
null
null
null
# Background In recent years many researchers have advocated that learning meanings from text-only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller, 2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et al., 2020](https://arxiv.org/abs/2004.10151)]. Therefore, multi-modal datasets are of paramount importance to the NLP community and to next-generation models. For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data. # Language + Vision ## Use case Typically, people working on Language+Vision tasks have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset. Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et al., 2017](https://arxiv.org/abs/1611.08481), [Shekhar et al., 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et al., 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et al., 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features. For all these types of features, people use one of the following formats: 1. [HDF5](https://pypi.org/project/h5py/) 2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html) 3. [LMDB](https://lmdb.readthedocs.io/en/release/) ## Implementation considerations I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following: 1. Download dataset 2. Download images associated with the dataset 3. Write a script that generates the visual features for every image and stores them in a specific file 4. Create a DataLoader that maps the visual features to the corresponding language example In my personal projects, I've decided to ignore HDF5 because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it. For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to an N-dimensional tensor and is therefore easily represented by a NumPy array.
Looking forward to hearing your thoughts about it!
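A minimal sketch of the per-image compressed NumPy approach mentioned above; paths, identifiers, and feature shapes are illustrative assumptions:

```python
import os
import numpy as np

def save_features(image_id, features, out_dir="features"):
    # One compressed .npz file per image, keyed by the dataset's image identifier.
    os.makedirs(out_dir, exist_ok=True)
    np.savez_compressed(os.path.join(out_dir, f"{image_id}.npz"), features=features)

def load_features(image_id, feat_dir="features"):
    # Opened lazily at access time, so this maps cleanly onto a DataLoader.
    with np.load(os.path.join(feat_dir, f"{image_id}.npz")) as data:
        return data["features"]

# e.g. 36 FastRCNN regions x 2048-dim features for one hypothetical GQA image
save_features("img_0001", np.random.rand(36, 2048).astype(np.float32))
feats = load_features("img_0001")
print(feats.shape)  # (36, 2048)
```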
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 18, "-1": 0, "confused": 0, "eyes": 4, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 23, "url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/263/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/262/comments
https://api.github.com/repos/huggingface/datasets/issues/262/events
https://github.com/huggingface/datasets/pull/262
636,702,849
MDExOlB1bGxSZXF1ZXN0NDMyODI3Mzcz
262
Add new dataset ANLI Round 1
{ "avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4", "events_url": "https://api.github.com/users/easonnie/events{/privacy}", "followers_url": "https://api.github.com/users/easonnie/followers", "following_url": "https://api.github.com/users/easonnie/following{/other_user}", "gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/easonnie", "id": 11016329, "login": "easonnie", "node_id": "MDQ6VXNlcjExMDE2MzI5", "organizations_url": "https://api.github.com/users/easonnie/orgs", "received_events_url": "https://api.github.com/users/easonnie/received_events", "repos_url": "https://api.github.com/users/easonnie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/easonnie/subscriptions", "type": "User", "url": "https://api.github.com/users/easonnie", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-11 04:14:57+00:00
2020-06-12 22:03:03+00:00
2020-06-12 22:03:03+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/262.diff", "html_url": "https://github.com/huggingface/datasets/pull/262", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/262" }
Adding the new dataset [ANLI](https://github.com/facebookresearch/anli/). I'm not familiar with how to add a new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future with potentially different formats. I think it will be better to separate them.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4", "events_url": "https://api.github.com/users/easonnie/events{/privacy}", "followers_url": "https://api.github.com/users/easonnie/followers", "following_url": "https://api.github.com/users/easonnie/following{/other_user}", "gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/easonnie", "id": 11016329, "login": "easonnie", "node_id": "MDQ6VXNlcjExMDE2MzI5", "organizations_url": "https://api.github.com/users/easonnie/orgs", "received_events_url": "https://api.github.com/users/easonnie/received_events", "repos_url": "https://api.github.com/users/easonnie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/easonnie/subscriptions", "type": "User", "url": "https://api.github.com/users/easonnie", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/262/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/262/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/261/comments
https://api.github.com/repos/huggingface/datasets/issues/261/events
https://github.com/huggingface/datasets/issues/261
636,372,380
MDU6SXNzdWU2MzYzNzIzODA=
261
Downloading dataset error with pyarrow.lib.RecordBatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4", "events_url": "https://api.github.com/users/cuent/events{/privacy}", "followers_url": "https://api.github.com/users/cuent/followers", "following_url": "https://api.github.com/users/cuent/following{/other_user}", "gists_url": "https://api.github.com/users/cuent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cuent", "id": 5248968, "login": "cuent", "node_id": "MDQ6VXNlcjUyNDg5Njg=", "organizations_url": "https://api.github.com/users/cuent/orgs", "received_events_url": "https://api.github.com/users/cuent/received_events", "repos_url": "https://api.github.com/users/cuent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cuent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cuent/subscriptions", "type": "User", "url": "https://api.github.com/users/cuent", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-10 16:04:19+00:00
2020-06-11 14:35:12+00:00
2020-06-11 14:35:12+00:00
NONE
null
null
null
null
I am trying to download `sentiment140` and I get the following error: ``` /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 472 try: 473 # Prepare split will record examples associated to the split --> 474 self._prepare_split(split_generator, **prepare_split_kwargs) 475 except OSError: 476 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 652 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 653 example = self.info.features.encode_example(record) --> 654 writer.write(example) 655 num_examples, num_bytes = writer.finalize() 656 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size) 143 self._build_writer(pa_table=pa.Table.from_pydict(example)) 144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size: --> 145 self.write_on_file() 146 147 def write_batch( /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self) 127 else: 128 # All good --> 129 self._write_array_on_file(pa_array) 130 self.current_rows = [] 131 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array) 96 def _write_array_on_file(self, pa_array): 97 """Write a PyArrow Array""" ---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array) 99 self._num_bytes += pa_array.nbytes 100 self.pa_writer.write_batch(pa_batch) AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' ``` I installed the latest version and ran the following command: ```python import nlp sentiment140 = nlp.load_dataset('sentiment140', cache_dir='/content') ```
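This `AttributeError` usually points to an outdated `pyarrow` build rather than a problem in `nlp` itself. A minimal check, assuming a pip-managed environment (the exact minimum version required by `nlp` is an assumption here):

```python
import pyarrow as pa

# nlp relies on newer RecordBatch APIs; very old pyarrow builds lack
# from_struct_array, which matches the traceback above
print(pa.__version__)

# if the printed version is old, upgrading usually resolves the error:
#   pip install --upgrade pyarrow
```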
{ "avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4", "events_url": "https://api.github.com/users/cuent/events{/privacy}", "followers_url": "https://api.github.com/users/cuent/followers", "following_url": "https://api.github.com/users/cuent/following{/other_user}", "gists_url": "https://api.github.com/users/cuent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cuent", "id": 5248968, "login": "cuent", "node_id": "MDQ6VXNlcjUyNDg5Njg=", "organizations_url": "https://api.github.com/users/cuent/orgs", "received_events_url": "https://api.github.com/users/cuent/received_events", "repos_url": "https://api.github.com/users/cuent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cuent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cuent/subscriptions", "type": "User", "url": "https://api.github.com/users/cuent", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/261/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/261/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/260/comments
https://api.github.com/repos/huggingface/datasets/issues/260/events
https://github.com/huggingface/datasets/pull/260
636,261,118
MDExOlB1bGxSZXF1ZXN0NDMyNDY3NDM5
260
Consistency fixes
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-10 13:44:42+00:00
2020-06-11 10:34:37+00:00
2020-06-11 10:34:36+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/260.diff", "html_url": "https://github.com/huggingface/datasets/pull/260", "merged_at": "2020-06-11T10:34:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/260.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/260" }
A few bugs I've found while hacking
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/260/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/260/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/259/comments
https://api.github.com/repos/huggingface/datasets/issues/259/events
https://github.com/huggingface/datasets/issues/259
636,239,529
MDU6SXNzdWU2MzYyMzk1Mjk=
259
documentation missing on how to split a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/2873355?v=4", "events_url": "https://api.github.com/users/fotisj/events{/privacy}", "followers_url": "https://api.github.com/users/fotisj/followers", "following_url": "https://api.github.com/users/fotisj/following{/other_user}", "gists_url": "https://api.github.com/users/fotisj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fotisj", "id": 2873355, "login": "fotisj", "node_id": "MDQ6VXNlcjI4NzMzNTU=", "organizations_url": "https://api.github.com/users/fotisj/orgs", "received_events_url": "https://api.github.com/users/fotisj/received_events", "repos_url": "https://api.github.com/users/fotisj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fotisj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fotisj/subscriptions", "type": "User", "url": "https://api.github.com/users/fotisj", "user_view_type": "public" }
[]
closed
false
null
[]
null
7
2020-06-10 13:18:13+00:00
2023-03-14 13:56:07+00:00
2020-06-18 22:20:24+00:00
NONE
null
null
null
null
I am trying to understand how to split a dataset (as an arrow_dataset). I know I can do something like this to access a split which is already in the original dataset: `ds_test = nlp.load_dataset('imdb', split='test')` But how can I split ds_test into a test and a validation set (without reading the data into memory, keeping the arrow_dataset as the container)? I guess it has something to do with the splits module :-) but there is no real documentation in the code, only a reference to a longer description: > See the [guide on splits](https://github.com/huggingface/nlp/tree/master/docs/splits.md) for more information. But the guide seems to be missing. To clarify: I know that this has been modelled after tensorflow datasets and that some of the documentation there can be used, [like this one](https://www.tensorflow.org/datasets/splits). But to come back to the example above: I cannot simply split the test set by doing this: `ds_test = nlp.load_dataset('imdb', split='test[:5000]')` `ds_val = nlp.load_dataset('imdb', split='test[5000:]')` because the imdb test data is sorted by class (and splitting it that way is probably not a good idea anyway).
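Until the splits guide lands, one work-around is to shuffle the split and then select index ranges; a minimal sketch, assuming the `shuffle`/`select` methods available on arrow datasets (the seed and sizes are arbitrary):

```python
import nlp

# shuffle first, since the imdb test split is sorted by class
ds_test = nlp.load_dataset('imdb', split='test').shuffle(seed=42)

ds_val = ds_test.select(range(5000))                      # first 5000 shuffled rows
ds_test_rest = ds_test.select(range(5000, len(ds_test)))  # the remainder
```

Both calls keep the data memory-mapped on disk; only the index mapping changes, so nothing is read fully into memory.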
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/259/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/259/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/258/comments
https://api.github.com/repos/huggingface/datasets/issues/258/events
https://github.com/huggingface/datasets/issues/258
635,859,525
MDU6SXNzdWU2MzU4NTk1MjU=
258
Why is the dataset after tokenization far larger than the original one?
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2020-06-10 01:27:07+00:00
2020-06-10 12:46:34+00:00
2020-06-10 12:46:34+00:00
CONTRIBUTOR
null
null
null
null
I tokenize the wiki dataset with `map` and cache the results. ``` def tokenize_tfm(example): example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text'])) return example wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train'] wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow") ``` and when I check their sizes ``` ls -l --block-size=M 17460M wikipedia-train.arrow 47511M tokenized_wiki.arrow ``` the tokenized one is over 2x the size of the original one. Did I do something wrong?
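Two effects usually explain the growth: token ids are stored as 64-bit integers in Arrow (several bytes per token versus a few UTF-8 characters of text), and `map` keeps the original `text` column next to the new `input_ids`. A hedged sketch that drops the raw text, assuming your `nlp` version supports the `remove_columns` argument to `map` (the toy whitespace tokenizer is a stand-in for the real one):

```python
import nlp

wiki = nlp.load_dataset('wikipedia', '20200501.en')['train']

def tokenize_tfm(example):
    # stand-in whitespace tokenizer; each id ends up as a 64-bit int
    # in the cached Arrow file
    example['input_ids'] = [hash(tok) % 30000 for tok in example['text'].split()]
    return example

# dropping 'text' keeps only the ids in the cached output, shrinking it
wiki = wiki.map(tokenize_tfm, remove_columns=['text'])
```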
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/258/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/257/comments
https://api.github.com/repos/huggingface/datasets/issues/257/events
https://github.com/huggingface/datasets/issues/257
635,620,979
MDU6SXNzdWU2MzU2MjA5Nzk=
257
Tokenizer pickling issue fix not landed in `nlp` yet?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-09 17:12:34+00:00
2020-06-10 21:45:32+00:00
2020-06-09 17:26:53+00:00
NONE
null
null
null
null
Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function: ``` dataset = nlp.load_dataset('cos_e') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir) for split in dataset.keys(): dataset[split].map(lambda x: some_function(x, tokenizer)) ``` ``` 06/09/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from /home/sarahw/.cache/huggingface/datasets/cos_e/default/0.0.1 Traceback (most recent call last): File "generation/input_to_label_and_rationale.py", line 390, in <module> main() File "generation/input_to_label_and_rationale.py", line 263, in main dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 522, in map cache_file_name = self._get_cache_file_path(function, cache_kwargs) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 381, in _get_cache_file_path function_bytes = dumps(function) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 257, in dumps dump(obj, file) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 250, in dump Pickler(file).dump(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 445, in dump StockPickler.dump(self, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 485, in dump self.save(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1410, in save_function pickler.save_reduce(_create_function, (obj.__code__, File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1147, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 884, in save_tuple 
save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 576, in save rv = reduce(self.proto) TypeError: cannot pickle 'Tokenizer' object ``` The fix seems to be in the tokenizers [`0.8.0.dev1` pre-release](https://github.com/huggingface/tokenizers/issues/87), which I can't install with any package manager.
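Until that release lands, one common work-around is to use the slow, pure-Python tokenizer, which pickles cleanly; a minimal sketch (the `question` field name and the `encode` helper are assumptions about the `cos_e` schema, standing in for `some_function` above):

```python
import nlp
from transformers import GPT2Tokenizer  # slow tokenizer: picklable, unlike the Rust-backed fast one

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

def encode(example):
    # hypothetical transform; the fast tokenizer would raise at cache-key
    # pickling time, while this one serializes fine
    example['input_ids'] = tokenizer.encode(example['question'])
    return example

dataset = nlp.load_dataset('cos_e')
for split in dataset.keys():
    dataset[split] = dataset[split].map(encode)
```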
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/257/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/257/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/256/comments
https://api.github.com/repos/huggingface/datasets/issues/256/events
https://github.com/huggingface/datasets/issues/256
635,596,295
MDU6SXNzdWU2MzU1OTYyOTU=
256
[Feature request] Add a feature to dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie", "user_view_type": "public" }
[]
closed
false
null
[]
null
5
2020-06-09 16:38:12+00:00
2020-06-09 16:51:42+00:00
2020-06-09 16:51:42+00:00
NONE
null
null
null
null
Is there a straightforward way to add a field to the arrow_dataset, prior to performing map?
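One way that works today, assuming `map` merges returned dicts into the row as documented: returning a dict with an unseen key adds it as a new column.

```python
import nlp

ds = nlp.load_dataset('imdb', split='test')

# returning a dict with a new key from map() adds it as a column
ds = ds.map(lambda example: {'text_length': len(example['text'])})
print(ds[0]['text_length'])
```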
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/256/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/256/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/255/comments
https://api.github.com/repos/huggingface/datasets/issues/255/events
https://github.com/huggingface/datasets/pull/255
635,300,822
MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0
255
Add dataset/piaf
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-09 10:16:01+00:00
2020-06-12 08:31:27+00:00
2020-06-12 08:31:27+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/255.diff", "html_url": "https://github.com/huggingface/datasets/pull/255", "merged_at": "2020-06-12T08:31:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/255" }
Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/255/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/254/comments
https://api.github.com/repos/huggingface/datasets/issues/254/events
https://github.com/huggingface/datasets/issues/254
635,057,568
MDU6SXNzdWU2MzUwNTc1Njg=
254
[Feature request] Be able to remove a specific sample of the dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/astariul", "id": 43774355, "login": "astariul", "node_id": "MDQ6VXNlcjQzNzc0MzU1", "organizations_url": "https://api.github.com/users/astariul/orgs", "received_events_url": "https://api.github.com/users/astariul/received_events", "repos_url": "https://api.github.com/users/astariul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "type": "User", "url": "https://api.github.com/users/astariul", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-09 02:22:13+00:00
2020-06-09 08:41:38+00:00
2020-06-09 08:41:38+00:00
NONE
null
null
null
null
As mentioned in #117, it's currently not possible to remove a sample from the dataset. But it is an important use case: after applying some preprocessing, some samples might end up empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so that when iterating over the dataset we skip them. I think it should be a feature. What do you think? --- Any work-around in the meantime?
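As a work-around with the current API, `filter` can produce a new dataset without the unwanted samples; a minimal sketch, assuming a `filter` method on arrow datasets:

```python
import nlp

ds = nlp.load_dataset('imdb', split='test')

# keep only non-empty samples; this writes a new arrow table rather than
# mutating the original dataset in place
ds = ds.filter(lambda example: len(example['text'].strip()) > 0)
```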
{ "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/astariul", "id": 43774355, "login": "astariul", "node_id": "MDQ6VXNlcjQzNzc0MzU1", "organizations_url": "https://api.github.com/users/astariul/orgs", "received_events_url": "https://api.github.com/users/astariul/received_events", "repos_url": "https://api.github.com/users/astariul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "type": "User", "url": "https://api.github.com/users/astariul", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/254/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/254/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/253/comments
https://api.github.com/repos/huggingface/datasets/issues/253/events
https://github.com/huggingface/datasets/pull/253
634,791,939
MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz
253
add flue dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
[]
closed
false
null
[]
null
10
2020-06-08 17:11:09+00:00
2023-09-24 09:46:03+00:00
2020-07-16 07:50:59+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/253.diff", "html_url": "https://github.com/huggingface/datasets/pull/253", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/253.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/253" }
This PR adds the FLUE dataset as requested in issue #223. @lbourdois gave a detailed description in that issue.
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/253/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/253/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/252/comments
https://api.github.com/repos/huggingface/datasets/issues/252/events
https://github.com/huggingface/datasets/issues/252
634,563,239
MDU6SXNzdWU2MzQ1NjMyMzk=
252
NonMatchingSplitsSizesError error when reading the IMDB dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "events_url": "https://api.github.com/users/antmarakis/events{/privacy}", "followers_url": "https://api.github.com/users/antmarakis/followers", "following_url": "https://api.github.com/users/antmarakis/following{/other_user}", "gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/antmarakis", "id": 17463361, "login": "antmarakis", "node_id": "MDQ6VXNlcjE3NDYzMzYx", "organizations_url": "https://api.github.com/users/antmarakis/orgs", "received_events_url": "https://api.github.com/users/antmarakis/received_events", "repos_url": "https://api.github.com/users/antmarakis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions", "type": "User", "url": "https://api.github.com/users/antmarakis", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2020-06-08 12:26:24+00:00
2021-08-27 15:20:58+00:00
2020-06-08 14:01:26+00:00
NONE
null
null
null
null
Hi! I am trying to load the `imdb` dataset with this line: `dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')` but I am getting the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset save_infos=save_infos, File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` Am I overlooking something? Thanks!
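A mismatch between expected and recorded split sizes usually means a partial or corrupted copy in the cache (or a deliberately modified local copy). Two hedged options via `load_dataset` kwargs visible in the traceback above; that `GenerateMode` is exposed at the top level of `nlp` is an assumption:

```python
import nlp

# option 1: skip the split-size verification, useful if the local copy is
# intentionally different from the reference sizes
dataset = nlp.load_dataset('imdb', ignore_verifications=True)

# option 2: force a clean re-download to replace a corrupted cache
dataset = nlp.load_dataset('imdb', download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD)
```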
{ "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "events_url": "https://api.github.com/users/antmarakis/events{/privacy}", "followers_url": "https://api.github.com/users/antmarakis/followers", "following_url": "https://api.github.com/users/antmarakis/following{/other_user}", "gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/antmarakis", "id": 17463361, "login": "antmarakis", "node_id": "MDQ6VXNlcjE3NDYzMzYx", "organizations_url": "https://api.github.com/users/antmarakis/orgs", "received_events_url": "https://api.github.com/users/antmarakis/received_events", "repos_url": "https://api.github.com/users/antmarakis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions", "type": "User", "url": "https://api.github.com/users/antmarakis", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/252/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/251/comments
https://api.github.com/repos/huggingface/datasets/issues/251/events
https://github.com/huggingface/datasets/pull/251
634,544,977
MDExOlB1bGxSZXF1ZXN0NDMxMDgwMDkw
251
Better access to all dataset information
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2020-06-08 11:56:50+00:00
2020-06-12 08:13:00+00:00
2020-06-12 08:12:58+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/251.diff", "html_url": "https://github.com/huggingface/datasets/pull/251", "merged_at": "2020-06-12T08:12:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/251.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/251" }
Moves all the dataset info down one level, from `dataset.info.XXX` to `dataset.XXX`. This way it's easier to access, e.g., `dataset.features['label']`. Also adds the original split instructions used to create the dataset in `dataset.split`. Ex: ``` from nlp import load_dataset stsb = load_dataset('glue', name='stsb', split='train') stsb.split >>> NamedSplit('train') ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/251/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/251/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/250/comments
https://api.github.com/repos/huggingface/datasets/issues/250/events
https://github.com/huggingface/datasets/pull/250
634,416,751
MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4
250
Remove checksum download in c4
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2020-06-08 09:13:00+00:00
2020-08-25 07:04:56+00:00
2020-06-08 09:16:59+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/250.diff", "html_url": "https://github.com/huggingface/datasets/pull/250", "merged_at": "2020-06-08T09:16:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/250.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/250" }
A line from the original tfds script was still there and caused issues when loading the c4 script. This PR should fix #233 and allow anyone to load the c4 script to generate the dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/250/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/250/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/249/comments
https://api.github.com/repos/huggingface/datasets/issues/249/events
https://github.com/huggingface/datasets/issues/249
633,393,443
MDU6SXNzdWU2MzMzOTM0NDM=
249
[Dataset created] some small but critical issues I hit when creating a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
2
2020-06-07 12:58:54+00:00
2020-06-12 08:28:51+00:00
2020-06-12 08:28:51+00:00
CONTRIBUTOR
null
null
null
null
Hi, I successfully created a dataset and have made a PR: #248. But I encountered several problems while creating it, and those should be easy to fix. 1. dataset_info.json not found — should be fixed by #241, eager to see it merged. 2. Forced to install `apache_beam` — if we have to install it, it might be better to include it in the package dependencies or specify it in `CONTRIBUTING.md` ``` Traceback (most recent call last): File "nlp-cli", line 10, in <module> from nlp.commands.run_beam import RunBeamCommand File "/home/yisiang/nlp/src/nlp/commands/run_beam.py", line 6, in <module> import apache_beam as beam ModuleNotFoundError: No module named 'apache_beam' ``` 3. `cache_dir` is `None` ``` File "/home/yisiang/nlp/src/nlp/datasets/bookscorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookscorpus.py", line 88, in _split_generators downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive) File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 128, in download_custom downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls) File "/home/yisiang/nlp/src/nlp/utils/py_utils.py", line 172, in map_nested return function(data_struct) File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 126, in url_to_downloaded_path return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url)) File "/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py", line 80, in join a = os.fspath(a) ``` This is because of this line https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/src/nlp/commands/test.py#L30-L32 I added `--cache_dir="...."` to `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` from the doc, and finally I could get past this error. But it seems to ignore my arg and use `/home/yisiang/.cache/huggingface/datasets/bookscorpus/plain_text/1.0.0` as cache_dir. 4. There is no `pytest` — so maybe the doc should specify a step to install pytest. 5. Not enough capacity in my `/tmp` — when running the test for dummy data, I don't know why it asks me for 5.6 GB to download something, ``` def download_and_prepare ... if not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root): raise IOError( "Not enough disk space. Needed: {} (download: {}, generated: {})".format( utils.size_str(self.info.size_in_bytes or 0), utils.size_str(self.info.download_size or 0), > utils.size_str(self.info.dataset_size or 0), ) ) E OSError: Not enough disk space. Needed: 5.62 GiB (download: 1.10 GiB, generated: 4.52 GiB) ``` I added `processed_temp_dir="some/dir"; raw_temp_dir="another/dir"` to line 71, and the test passed https://github.com/huggingface/nlp/blob/a67a6c422dece904b65d18af65f0e024e839dbe8/tests/test_dataset_common.py#L70-L72 I suggest we create the tmp dir under `/home/user/tmp` rather than `/tmp`, because on our lab server, for example, everyone uses `/tmp`, so it has little free capacity (a sketch of redirecting Python's tmp directory follows below). Or at least we could improve the error message, so the user knows which directory is out of space and how much is left. Or we could do both. 6. Naming of datasets — I was surprised by the dataset name `books_corpus` and didn't know it comes from `class BooksCorpus(nlp.GeneratorBasedBuilder)`. I changed it to `Bookscorpus` afterwards. I think this point should also be in the doc. 7. More thorough doc on how to create `dataset.py` — I believe there will be one. **Feel free to close this issue** if you think these are solved.
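Regarding point 5, a possible stop-gap on machines with a small `/tmp` is to point Python's `tempfile` at a roomier directory before running the tests; a sketch using only the standard library:

```python
import os
import tempfile

# steer temporary files to the user's home, where there is more space
os.environ['TMPDIR'] = os.path.expanduser('~/tmp')
os.makedirs(os.environ['TMPDIR'], exist_ok=True)

tempfile.tempdir = None          # force tempfile to re-read TMPDIR
print(tempfile.gettempdir())     # should now print /home/<user>/tmp
```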
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/249/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/249/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/248/comments
https://api.github.com/repos/huggingface/datasets/issues/248/events
https://github.com/huggingface/datasets/pull/248
633,390,427
MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0
248
add Toronto BooksCorpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
[]
closed
false
null
[]
null
11
2020-06-07 12:54:56+00:00
2020-06-12 08:45:03+00:00
2020-06-12 08:45:02+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/248.diff", "html_url": "https://github.com/huggingface/datasets/pull/248", "merged_at": "2020-06-12T08:45:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/248.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/248" }
1. I know there is a branch `toronto_books_corpus`
- After I downloaded it, I found it is all non-English and only has one row.
- It seems that it cites the wrong paper: according to the papers using it, it is called `BooksCorpus`, not `TorontoBooksCorpus`.
2. It uses a text mirror on Google Drive
- `bookscorpus.py` includes a function `download_file_from_google_drive`; maybe you will want to put it elsewhere (a sketch of the usual pattern is shown below).
- The text mirror is found in this [comment on the issue](https://github.com/soskek/bookcorpus/issues/24#issuecomment-556024973), and it is said to have the same statistics as the one in the paper.
- You may want to download it and put it on your gs in case it disappears someday.
3. Copyright? The paper says:

> **The BookCorpus Dataset.** In order to train our sentence similarity model we collected a corpus of 11,038 books ***from the web***. These are __**free books written by yet unpublished authors**__. We only included books that had more than 20K words in order to filter out perhaps noisier shorter stories. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.

and we have changed the form (it is no longer books), so I don't think it should have those problems. Or we can state that it is used at your own risk, or for academic use only. I know @thomwolf knows more about these things.

This should solve #131.
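For reference, a hedged sketch of the confirm-token pattern commonly used for downloading large shared files from Google Drive; this is not necessarily the exact implementation in `bookscorpus.py`:
```
import requests

def download_file_from_google_drive(file_id, destination):
    """Download a (possibly large) publicly shared file from Google Drive."""
    url = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(url, params={"id": file_id}, stream=True)
    # Large files first return a virus-scan warning page; the confirm
    # token needed to proceed is stored in a "download_warning" cookie.
    token = None
    for key, value in response.cookies.items():
        if key.startswith("download_warning"):
            token = value
    if token is not None:
        response = session.get(url, params={"id": file_id, "confirm": token}, stream=True)
    with open(destination, "wb") as f:
        for chunk in response.iter_content(chunk_size=32768):
            if chunk:  # skip keep-alive chunks
                f.write(chunk)
```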
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/248/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/248/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/247/comments
https://api.github.com/repos/huggingface/datasets/issues/247/events
https://github.com/huggingface/datasets/pull/247
632,380,078
MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2
247
Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2020-06-06 11:02:10+00:00
2020-06-08 09:18:16+00:00
2020-06-08 09:18:14+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/247.diff", "html_url": "https://github.com/huggingface/datasets/pull/247", "merged_at": "2020-06-08T09:18:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/247.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/247" }
This PR makes all dataset loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements (see the short example below). Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu? **Important** It does break backward compatibility for these datasets because: 1. When loading the complete dataset, the order in which the examples are saved is now different. 2. When loading only part of a split, the examples themselves might be different. @patrickvonplaten - the nlp / longformer notebook has to be updated since the examples might now be different.
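A minimal before/after illustration of the change (the `data/` directory is just a placeholder):
```
import glob
import os

# Order of results is arbitrary and filesystem-dependent:
files = glob.glob("data/*.txt")
names = os.listdir("data")

# Wrapping the calls in sorted() makes iteration order, and hence the
# order of the generated examples, stable across machines and runs:
files = sorted(glob.glob("data/*.txt"))
names = sorted(os.listdir("data"))
```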
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/247/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/247/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/246/comments
https://api.github.com/repos/huggingface/datasets/issues/246/events
https://github.com/huggingface/datasets/issues/246
632,380,054
MDU6SXNzdWU2MzIzODAwNTQ=
246
What is the best way to cache a dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/112599?v=4", "events_url": "https://api.github.com/users/Mistobaan/events{/privacy}", "followers_url": "https://api.github.com/users/Mistobaan/followers", "following_url": "https://api.github.com/users/Mistobaan/following{/other_user}", "gists_url": "https://api.github.com/users/Mistobaan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mistobaan", "id": 112599, "login": "Mistobaan", "node_id": "MDQ6VXNlcjExMjU5OQ==", "organizations_url": "https://api.github.com/users/Mistobaan/orgs", "received_events_url": "https://api.github.com/users/Mistobaan/received_events", "repos_url": "https://api.github.com/users/Mistobaan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mistobaan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mistobaan/subscriptions", "type": "User", "url": "https://api.github.com/users/Mistobaan", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2020-06-06 11:02:07+00:00
2020-07-09 09:15:07+00:00
2020-07-09 09:15:07+00:00
NONE
null
null
null
null
For example, if I want to use Streamlit with an nlp dataset: ``` @st.cache def load_data(): return nlp.load_dataset('squad') ``` This code raises the error "uncachable object". Right now I just fixed it with a constant hash for my specific case: ``` @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0}) ``` But I was curious to know what the best way is in general.
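For completeness, a minimal self-contained sketch of the workaround above, assuming `streamlit`, `nlp`, and `pyarrow` are installed and that the installed Streamlit version supports `hash_funcs` (as the snippet in the question already uses):
```
import nlp
import pyarrow
import streamlit as st

# Mapping pyarrow.lib.Buffer to a constant tells Streamlit to skip hashing
# the memory-mapped Arrow buffers, which is what triggers the
# "uncachable object" error. The trade-off is that the cache is never
# invalidated when the underlying data changes.
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
def load_data():
    return nlp.load_dataset("squad")

dataset = load_data()
st.write(dataset["train"][0])
```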
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/246/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/246/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false