| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[ns, tz=UTC]) | updated_at (timestamp[ns, tz=UTC]) | closed_at (timestamp[ns, tz=UTC]) | author_association (string) | type (null) | active_lock_reason (null) | sub_issues_summary (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 
	https://api.github.com/repos/huggingface/datasets/issues/6 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/6/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/6/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/6/events | 
	https://github.com/huggingface/datasets/issues/6 | 600,330,836 | 
	MDU6SXNzdWU2MDAzMzA4MzY= | 6 | 
	Error when citation is not given in the DatasetInfo | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Yes looks good to me.\r\nNote that we may refactor quite strongly the `info.py` to make it a lot simpler (it's very complicated for basically a dictionary of info I think)",
  "No, problem ^^ It might just be a temporary fix :)",
  "Fixed."
] | 2020-04-15T14:14:54Z | 2020-04-29T09:23:22Z | 2020-04-29T09:23:22Z | 
	CONTRIBUTOR | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	The following error is raised when we instantiate a `DatasetInfo` without the `citation` parameter:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
    citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
AttributeError: 'NoneType' object has no attribute 'strip'
```
I propose the following change in the `info.py` file. The method:
```python
def __repr__(self):
        splits_pprint = _indent("\n".join(["{"] + [
                "    '{}': {},".format(k, split.num_examples)
                for k, split in sorted(self.splits.items())
        ] + ["}"]))
        features_pprint = _indent(repr(self.features))
        citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
        return INFO_STR.format(
                name=self.name,
                version=self.version,
                description=self.description,
                total_num_examples=self.splits.total_num_examples,
                features=features_pprint,
                splits=splits_pprint,
                citation=citation_pprint,
                homepage=self.homepage,
                supervised_keys=self.supervised_keys,
                # Proto add a \n that we strip.
                license=str(self.license).strip())
```
Becomes:
```python
def __repr__(self):
        splits_pprint = _indent("\n".join(["{"] + [
                "    '{}': {},".format(k, split.num_examples)
                for k, split in sorted(self.splits.items())
        ] + ["}"]))
        features_pprint = _indent(repr(self.features))
        ## the strip is done only if the citation is given
        citation_pprint = self.citation
        if self.citation:
            citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
        return INFO_STR.format(
                name=self.name,
                version=self.version,
                description=self.description,
                total_num_examples=self.splits.total_num_examples,
                features=features_pprint,
                splits=splits_pprint,
                citation=citation_pprint,
                homepage=self.homepage,
                supervised_keys=self.supervised_keys,
                # Proto add a \n that we strip.
                license=str(self.license).strip())
```
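For clarity, the essential change is the guard that skips `strip()` when no citation is set; a minimal, standalone sketch of that pattern (the `format_citation` helper below is hypothetical and not part of `info.py`):
```python
def format_citation(citation):
    # Calling .strip() on None raises AttributeError, so only format
    # the citation when it is actually provided.
    if citation:
        return '"""{}"""'.format(citation.strip())
    return citation  # None or an empty string is passed through unchanged


print(format_citation("  @article{...}  "))  # -> """@article{...}"""
print(format_citation(None))                 # -> None
```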
And now it works. @thomwolf, are you OK with this fix? | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/6/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/6/timeline | null | 
	completed | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/5 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/5/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/5/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/5/events | 
	https://github.com/huggingface/datasets/issues/5 | 600,295,889 | 
	MDU6SXNzdWU2MDAyOTU4ODk= | 5 | 
	ValueError when a split is empty | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "To fix this I propose to modify only the file `arrow_reader.py` with few updates. First update, the following method:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n        name,\r\n        name2len,\r\n        absolute_instructions,\r\n):\r\n    \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n    # For each split, return the files instruction (skip/take)\r\n    file_instructions = []\r\n    num_examples = 0\r\n    for abs_instr in absolute_instructions:\r\n        length = name2len[abs_instr.splitname]\r\n        if not length:\r\n            raise ValueError(\r\n                    'Split empty. This might means that dataset hasn\\'t been generated '\r\n                    'yet and info not restored from GCS, or that legacy dataset is used.')\r\n        filename = filename_for_dataset_split(\r\n                dataset_name=name,\r\n                split=abs_instr.splitname,\r\n                filetype_suffix='arrow')\r\n        from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n        to = length if abs_instr.to is None else abs_instr.to\r\n        num_examples += to - from_\r\n        single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n        file_instructions.extend(single_file_instructions)\r\n    return FileInstructions(\r\n            num_examples=num_examples,\r\n            file_instructions=file_instructions,\r\n    )\r\n```\r\nBecomes:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n        name,\r\n        name2len,\r\n        absolute_instructions,\r\n):\r\n    \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n    # For each split, return the files instruction (skip/take)\r\n    file_instructions = []\r\n    num_examples = 0\r\n    for abs_instr in absolute_instructions:\r\n        length = name2len[abs_instr.splitname]\r\n        ## Delete the if not length and the raise\r\n        filename = filename_for_dataset_split(\r\n                dataset_name=name,\r\n                split=abs_instr.splitname,\r\n                filetype_suffix='arrow')\r\n        from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n        to = length if abs_instr.to is None else abs_instr.to\r\n        num_examples += to - from_\r\n        single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n        file_instructions.extend(single_file_instructions)\r\n    return FileInstructions(\r\n            num_examples=num_examples,\r\n            file_instructions=file_instructions,\r\n    )\r\n```\r\n\r\nSecond update the following method:\r\n```python\r\ndef _read_files(files, info):\r\n    \"\"\"Returns Dataset for given file instructions.\r\n\r\n    Args:\r\n        files: List[dict(filename, skip, take)], the files information.\r\n            The filenames contain the absolute path, not relative.\r\n            skip/take indicates which example read in the file: `ds.slice(skip, take)`\r\n    \"\"\"\r\n    pa_batches = []\r\n    for f_dict in files:\r\n        pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n        pa_batches.extend(pa_table.to_batches())\r\n    pa_table = pa.Table.from_batches(pa_batches)\r\n    ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n    return ds\r\n```\r\nBecomes:\r\n```python\r\ndef _read_files(files, info):\r\n    \"\"\"Returns Dataset for given file instructions.\r\n\r\n    Args:\r\n        files: List[dict(filename, skip, take)], the 
files information.\r\n            The filenames contain the absolute path, not relative.\r\n            skip/take indicates which example read in the file: `ds.slice(skip, take)`\r\n    \"\"\"\r\n    pa_batches = []\r\n    for f_dict in files:\r\n        pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n        pa_batches.extend(pa_table.to_batches())\r\n    ## we modify the table only if there are some batches\r\n    if pa_batches:\r\n        pa_table = pa.Table.from_batches(pa_batches)\r\n    ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n    return ds\r\n```\r\n\r\nWith these two updates it works now. @thomwolf are you ok with this changes?",
  "Yes sounds good to me!\r\nDo you want to make a PR? or I can do it as well",
  "Fixed."
] | 2020-04-15T13:25:13Z | 2020-04-29T09:23:05Z | 2020-04-29T09:23:05Z | 
	CONTRIBUTOR | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	When a split (either TEST, VALIDATION, or TRAIN) is empty, I get the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
    ds = dbuilder.as_dataset(**as_dataset_kwargs)
  File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 587, in as_dataset
    datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in map_nested
    for k, v in data_struct.items()
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in <dictcomp>
    for k, v in data_struct.items()
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
    return function(data_struct)
  File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 601, in _build_single_dataset
    split=split,
  File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 625, in _as_dataset
    split_infos=self.info.splits.values(),
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 200, in read
    return py_utils.map_nested(_read_instruction_to_ds, instructions)
  File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
    return function(data_struct)
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 191, in _read_instruction_to_ds
    file_instructions = make_file_instructions(name, split_infos, instruction)
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 104, in make_file_instructions
    absolute_instructions=absolute_instructions,
  File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 122, in _make_file_instructions_from_absolutes
    'Split empty. This might means that dataset hasn\'t been generated '
ValueError: Split empty. This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.
``` 
How to reproduce:
```python
import csv
import nlp
class Bbc(nlp.GeneratorBasedBuilder):
    VERSION = nlp.Version("1.0.0")
    def __init__(self, **config):
        self.train = config.pop("train", None)
        self.validation = config.pop("validation", None)
        super(Bbc, self).__init__(**config)
    def _info(self):
        return nlp.DatasetInfo(builder=self, description="bla", features=nlp.features.FeaturesDict({"id": nlp.int32, "text": nlp.string, "label": nlp.string}))
    def _split_generators(self, dl_manager):
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": self.train}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": self.validation}),
                nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": None})]
    def _generate_examples(self, filepath):
        if not filepath:
            return None, {}
        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]
            for idx, line in enumerate(lines):
                yield idx, {"id": idx, "text": line[1], "label": line[0]}
```
```python
import nlp
dataset = nlp.load("bbc", builder_kwargs={"train": "bbc/data/train.csv", "validation": "bbc/data/test.csv"})
``` | 
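The fix discussed on this issue amounts to dropping the hard error and only rebuilding the Arrow table when at least one batch was read. A minimal, hypothetical pyarrow sketch of that guard (not the actual `arrow_reader.py` code):
```python
import pyarrow as pa

def combine_batches(tables):
    # Collect record batches from the given tables and only rebuild a
    # combined table when at least one batch exists; an empty split
    # contributes no batches instead of raising ValueError.
    batches = []
    for table in tables:
        batches.extend(table.to_batches())
    if batches:
        return pa.Table.from_batches(batches)
    return None  # the caller keeps whatever empty table it already had

print(combine_batches([]))                                   # -> None
print(combine_batches([pa.table({"id": [1, 2]})]).num_rows)  # -> 2
```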
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/5/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/5/timeline | null | 
	completed | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/4 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/4/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/4/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/4/events | 
	https://github.com/huggingface/datasets/issues/4 | 600,185,417 | 
	MDU6SXNzdWU2MDAxODU0MTc= | 4 | 
	[Feature] Keep the list of labels of a dataset as metadata | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Yes! I see mostly two options for this:\r\n- a `Feature` approach like currently (but we might deprecate features)\r\n- wrapping in a smart way the Dictionary arrays of Arrow: https://arrow.apache.org/docs/python/data.html?highlight=dictionary%20encode#dictionary-arrays",
  "I would have a preference for the second bullet point.",
  "This should be accessible now as a feature in dataset.info.features (and even have the mapping methods).",
  "Perfect! Well done!!",
  "Hi,\r\nI hope we could get a better documentation.\r\nIt took me more than 1 hour to found this way to get the label information.",
  "Yes we are working on the doc right now, should be in the next release quite soon."
] | 2020-04-15T10:17:10Z | 2020-07-08T16:59:46Z | 2020-05-04T06:11:57Z | 
	CONTRIBUTOR | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | 
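As a hedged illustration of the Arrow dictionary-array option mentioned in the discussion of this issue, the label vocabulary can travel with the data itself (a minimal pyarrow sketch, not the `datasets` implementation):
```python
import pyarrow as pa

# Dictionary-encoding stores each distinct label string once, so the list
# of labels is recoverable from the array's dictionary.
labels = pa.array(["sport", "politics", "sport", "tech"]).dictionary_encode()
print(labels.dictionary)  # -> ["sport", "politics", "tech"], i.e. the label list
print(labels.indices)     # -> integer codes referencing that list
```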
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/4/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/4/timeline | null | 
	completed | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/3 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/3/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/3/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/3/events | 
	https://github.com/huggingface/datasets/issues/3 | 600,180,050 | 
	MDU6SXNzdWU2MDAxODAwNTA= | 3 | 
	[Feature] More dataset outputs | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Yes!\r\n- pandas will be a one-liner in `arrow_dataset`: https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_pandas\r\n- for Spark I have no idea. let's investigate that at some point",
  "For Spark it looks to be pretty straightforward as well https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html but looks to be having a dependency to Spark is necessary, then nevermind we can skip it",
  "Now Pandas is available."
] | 2020-04-15T10:08:14Z | 2020-05-04T06:12:27Z | 2020-05-04T06:12:27Z | 
	CONTRIBUTOR | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	Add the following dataset outputs:
- Spark
- Pandas | 
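As noted in the discussion of this issue, the Pandas output is essentially a one-liner on top of Arrow; a minimal sketch with illustrative data (not the `datasets` API itself):
```python
import pyarrow as pa

# An Arrow table converts to a pandas DataFrame with a single call
# (requires pandas to be installed).
table = pa.table({"id": [0, 1], "text": ["a", "b"], "label": ["x", "y"]})
df = table.to_pandas()
print(df)
```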
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/3/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/3/timeline | null | 
	completed | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/2 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/2/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/2/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/2/events | 
	https://github.com/huggingface/datasets/issues/2 | 599,767,671 | 
	MDU6SXNzdWU1OTk3Njc2NzE= | 2 | 
	Issue to read a local dataset | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "My first bug report ❤️\r\nLooking into this right now!",
  "Ok, there are some news, most good than bad :laughing: \r\n\r\nThe dataset script now became:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n    VERSION = nlp.Version(\"1.0.0\")\r\n\r\n    def __init__(self, **config):\r\n        self.train = config.pop(\"train\", None)\r\n        self.validation = config.pop(\"validation\", None)\r\n        super(Bbc, self).__init__(**config)\r\n\r\n    def _info(self):\r\n        return nlp.DatasetInfo(builder=self, description=\"bla\", features=nlp.features.FeaturesDict({\"id\": nlp.int32, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n    def _split_generators(self, dl_manager):\r\n        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": self.train}),\r\n                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": self.validation})]\r\n\r\n    def _generate_examples(self, filepath):\r\n        with open(filepath) as f:\r\n            reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n            lines = list(reader)[1:]\r\n\r\n            for idx, line in enumerate(lines):\r\n                yield idx, {\"id\": idx, \"text\": line[1], \"label\": line[0]}\r\n\r\n```\r\n\r\nAnd the dataset folder becomes:\r\n```\r\n.\r\n├── bbc\r\n│   ├── bbc.py\r\n│   └── data\r\n│       ├── test.csv\r\n│       └── train.csv\r\n```\r\nI can load the dataset by using the keywords arguments like this:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\"})\r\n```\r\n\r\nThat was the good part ^^ Because it took me some time to understand that the script itself is put in cache in `datasets/src/nlp/datasets/some-hash/bbc.py` which is very difficult to discover without checking the source code. It means that doesn't matter the changes you do to your original script it is taken into account. I think instead of doing a hash on the name (I suppose it is the name), a hash on the content of the script itself should be a better solution.\r\n\r\nThen by diving a bit in the code I found the `force_reload` parameter [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L50) but the call of this `load_dataset` method is done with the `builder_kwargs` as seen [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L166) which is ok until the call to the builder is done as the builder do not have this `force_reload` parameter. To show as example, the previous load becomes:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\", \"force_reload\": True})\r\n```\r\nRaises\r\n```\r\nTraceback (most recent call last):\r\n  File \"<stdin>\", line 1, in <module>\r\n  File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 283, in load\r\n    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n  File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 170, in builder\r\n    builder_instance = builder_cls(**builder_kwargs)\r\n  File \"/home/jplu/dev/jplu/datasets/src/nlp/datasets/84d638d2a8ca919d1021a554e741766f50679dc6553d5a0612b6094311babd39/bbc.py\", line 12, in __init__\r\n    super(Bbc, self).__init__(**config)\r\nTypeError: __init__() got an unexpected keyword argument 'force_reload'\r\n```\r\nSo yes the cache is refreshed with the new script but then raises this error.",
  "Ok great, so as discussed today, let's:\r\n- have a main dataset directory inside the lib with sub-directories hashed by the content of the file\r\n- keep a cache for downloading the scripts from S3 for now\r\n- later: add methods to list and clean the local versions of the datasets (and the distant versions on S3 as well)\r\n\r\nSide question: do you often use `builder_kwargs` for other things than supplying file paths? I was thinking about having a more easy to read and remember `data_files` argument maybe.",
  "Good plan!\r\n\r\nYes I do use `builder_kwargs` for other things such as:\r\n- dataset name\r\n- properties to know how to properly read a CSV file: do I have to skip the first line in a CSV, which delimiter is used, and the columns ids to use.\r\n- properties to know how to properly read a JSON file: which properties in a JSON object to read",
  "Done!"
] | 2020-04-14T18:18:51Z | 2020-05-11T18:55:23Z | 2020-05-11T18:55:22Z | 
	CONTRIBUTOR | null | null | 
	{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
} | 
	Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is create and load a local dataset; the script I have written is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
    def __init__(self, **kwargs):
        super(BbcConfig, self).__init__(**kwargs)
class Bbc(nlp.GeneratorBasedBuilder):
    _DIR = "./data"
    _DEV_FILE = "test.csv"
    _TRAINING_FILE = "train.csv"
    BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]
    def _info(self):
        return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({"id": nlp.string, "text": nlp.string, "label": nlp.string}))
    def _split_generators(self, dl_manager):
        files = {"train": os.path.join(self._DIR, self._TRAINING_FILE), "dev": os.path.join(self._DIR, self._DEV_FILE)}
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]
    def _generate_examples(self, filepath):
        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]
            for idx, line in enumerate(lines):
                yield idx, {"id": idx, "text": line[1], "label": line[0]}
```
The dataset is attached to this issue as well:
[data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)
Now the steps to reproduce what I would like to do:
1. unzip the data locally (I know the nlp lib can detect and extract archives, but I want to keep the reproduction as simple as possible)
2. create the `bbc.py` script as above in the same location as the unzipped `data` folder.
Now I try to load the dataset in three different ways, and none of them works. The first uses the name of the dataset, as I would do with TFDS:
```python
import nlp
from bbc import Bbc
dataset = nlp.load("bbc")
```
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
    builder_cls = load_dataset(path, name=name, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
    local_files_only=local_files_only,
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
    if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
  File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
    with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
But @thomwolf told me there is no need to import the script, just to give its path, so I tried three different ways:
```python
import nlp
dataset = nlp.load("bbc.py")
```
And
```python
import nlp
dataset = nlp.load("./bbc.py")
```
And
```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```
These three ways give me:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
    builder_cls = load_dataset(path, name=name, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
    dataset_module = importlib.import_module(module_path)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```
Any idea of what I'm missing? Or I might have spotted a bug :) | 
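One of the suggestions in the discussion of this issue is to cache the dataset script under a hash of its content rather than of its name, so that edits to the script invalidate the cached copy. A minimal, hypothetical sketch of that idea (the `script_cache_key` helper is not part of the library):
```python
import hashlib

def script_cache_key(path):
    # Hash the script's bytes so any change to the file yields a new key.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# e.g. the cached copy would live under
# datasets/src/nlp/datasets/<script_cache_key("bbc.py")>/bbc.py
```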
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
  "events_url": "https://api.github.com/users/jplu/events{/privacy}",
  "followers_url": "https://api.github.com/users/jplu/followers",
  "following_url": "https://api.github.com/users/jplu/following{/other_user}",
  "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/jplu",
  "id": 959590,
  "login": "jplu",
  "node_id": "MDQ6VXNlcjk1OTU5MA==",
  "organizations_url": "https://api.github.com/users/jplu/orgs",
  "received_events_url": "https://api.github.com/users/jplu/received_events",
  "repos_url": "https://api.github.com/users/jplu/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/jplu",
  "user_view_type": "public"
} | 
	{
  "+1": 1,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 1,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/2/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/2/timeline | null | 
	completed | null | null | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/1 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/1/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/1/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/1/events | 
	https://github.com/huggingface/datasets/pull/1 | 599,457,467 | 
	MDExOlB1bGxSZXF1ZXN0NDAzMDk1NDYw | 1 | 
	changing nlp.bool to nlp.bool_ | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
  "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
  "followers_url": "https://api.github.com/users/mariamabarham/followers",
  "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
  "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/mariamabarham",
  "id": 38249783,
  "login": "mariamabarham",
  "node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
  "organizations_url": "https://api.github.com/users/mariamabarham/orgs",
  "received_events_url": "https://api.github.com/users/mariamabarham/received_events",
  "repos_url": "https://api.github.com/users/mariamabarham/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/mariamabarham",
  "user_view_type": "public"
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[] | 2020-04-14T10:18:02Z | 2022-10-04T09:31:40Z | 2020-04-14T12:01:40Z | 
	CONTRIBUTOR | null | null | null | 
	{
  "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
  "events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
  "followers_url": "https://api.github.com/users/thomwolf/followers",
  "following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
  "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
  "gravatar_id": "",
  "html_url": "https://github.com/thomwolf",
  "id": 7353373,
  "login": "thomwolf",
  "node_id": "MDQ6VXNlcjczNTMzNzM=",
  "organizations_url": "https://api.github.com/users/thomwolf/orgs",
  "received_events_url": "https://api.github.com/users/thomwolf/received_events",
  "repos_url": "https://api.github.com/users/thomwolf/repos",
  "site_admin": false,
  "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
  "type": "User",
  "url": "https://api.github.com/users/thomwolf",
  "user_view_type": "public"
} | 
	{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/1/reactions"
} | 
	https://api.github.com/repos/huggingface/datasets/issues/1/timeline | null | null | false | 
	{
  "diff_url": "https://github.com/huggingface/datasets/pull/1.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/1",
  "merged_at": "2020-04-14T12:01:40Z",
  "patch_url": "https://github.com/huggingface/datasets/pull/1.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/1"
} | true | 