		update total task number
README.md CHANGED
@@ -77,7 +77,7 @@ The UnpredicTable dataset consists of web tables formatted as few-shot tasks for
 
 There are several dataset versions available:
 
-* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,
+* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
 
 * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
 
@@ -188,7 +188,7 @@ The UnpredicTable datasets do not come with additional data splits.
 
 ### Curation Rationale
 
-Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,
+Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
 
 ### Source Data

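The diff above references two dataset variants hosted on the Hugging Face Hub. For reference, either can be loaded with the standard `datasets` library; the following is a minimal sketch (the `train` split name is an assumption, so check the printed `DatasetDict` for the actual splits):

```python
# Minimal sketch: load one of the UnpredicTable variants referenced above.
# Requires: pip install datasets
from datasets import load_dataset

# "MicPie/unpredictable_full" loads the same way; "unique" is the
# one-task-per-website variant described in the README.
ds = load_dataset("MicPie/unpredictable_unique")

print(ds)              # shows the available splits and their sizes
task = ds["train"][0]  # one few-shot task ("train" split name is an assumption)
print(task)
```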