Merge branch 'main' of https://huggingface.co/datasets/Biomedical-TeMU/ProfNER_corpus_NER
README.md CHANGED

---
license: cc-by-4.0
---

## Description

**Gold standard annotations for profession detection in Spanish COVID-19 tweets**

The entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test sets (60-20-20). The current version contains the training and development sets of the shared task with Gold Standard annotations. In addition, it contains the unannotated test and background sets.

For the Named Entity Recognition (profession detection) subtask, annotations are distributed in two formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (https://brat.nlplab.org/standoff.html).
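
As a quick orientation, the following is a minimal sketch of reading the entity lines of a Brat standoff `.ann` file. It assumes simple contiguous spans (`T<id>`, tab, `<label> <begin> <end>`, tab, `<text>`) and skips any other annotation lines; the path in the comment is hypothetical.

```python
# Minimal sketch: read text-bound entities from a Brat .ann file.
# Assumes contiguous spans only (no ";"-separated fragments).
from pathlib import Path

def read_ann(path):
    entities = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.startswith("T"):
            continue  # keep only text-bound annotations (skip notes, relations)
        ann_id, type_and_offsets, surface = line.split("\t")
        label, begin, end = type_and_offsets.split()
        entities.append((ann_id, label, int(begin), int(end), surface))
    return entities

# Hypothetical path; see the file layout described under "Files" below.
# print(read_ann("brat/train/1234567890.ann"))
```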

The TSV format follows the format employed in SMM4H 2019 Task 2:

tweet_id | begin | end | type | extraction
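
A split can then be loaded with pandas roughly as follows. This is only a sketch: the file name is hypothetical, and it assumes the columns listed above are present as a header row.

```python
# Minimal sketch: load one TSV split with pandas.
# Assumes a header row with the columns listed above; adjust if the
# files ship without one.
import pandas as pd

train = pd.read_csv("profner_train.tsv", sep="\t")  # hypothetical file name
# Expected columns: tweet_id, begin, end, type, extraction
print(train.head())
print(train["type"].value_counts())
```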

In addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CoNLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 spaCy model for tokenization.
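
The projection of character offsets onto BIO tags can be reproduced roughly as follows. This is a minimal sketch, not the included brat_to_conll.py; the example sentence and the PROFESION label are purely illustrative.

```python
# Minimal sketch of offset-to-BIO projection with the Spanish spaCy model.
# Requires: python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")

def to_bio(text, spans):
    """spans: iterable of (begin, end, label) character offsets into `text`."""
    doc = nlp(text)
    tags = ["O"] * len(doc)
    for begin, end, label in spans:
        first = True
        for i, tok in enumerate(doc):
            tok_begin, tok_end = tok.idx, tok.idx + len(tok.text)
            if tok_begin >= begin and tok_end <= end:
                tags[i] = ("B-" if first else "I-") + label
                first = False
    return list(zip([t.text for t in doc], tags))

# Illustrative only; the label set is defined by the Brat annotations.
print(to_bio("Soy enfermera en Madrid", [(4, 13, "PROFESION")]))
```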

## Files of the Named Entity Recognition subtask

Content:

- One TSV file per corpus split (train and valid).
- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid).
- BIO: folder with the corpus in BIO tagging. One file per corpus split (train and valid); a reader sketch follows this list.
- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-directory per corpus split (train and valid).
- train-valid-txt-files-english: folder with training and validation text files machine-translated to English.
- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab.
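
For the BIO folder, a reader along the following lines should suffice. It is a sketch that assumes one token and its tag per line, separated by whitespace, with blank lines between tweets; check the actual files for the exact column layout. The file name is hypothetical.

```python
# Minimal sketch: read a BIO/CoNLL-style file into lists of (token, tag) pairs.
# Assumes "token<whitespace>tag" lines and blank lines between tweets;
# verify against the actual files in the BIO folder.
def read_bio(path):
    tweets, current = [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:                # blank line ends the current tweet
                if current:
                    tweets.append(current)
                    current = []
                continue
            parts = line.split()
            current.append((parts[0], parts[-1]))  # token, BIO tag
    if current:
        tweets.append(current)
    return tweets

# Hypothetical file name:
# print(read_bio("BIO/train.bio")[0])
```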

