Commit 2780222 · Parent(s): 2bece07

Adding Evaluation Results (#6)

- Adding Evaluation Results (8d6b87066fff9ed5e561f917e900403dc1699c45)

Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

README.md CHANGED
@@ -11,4 +11,17 @@ This is [Llama 2 13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) with som
 
 Fine-tuned on ~10M tokens from RedPajama to settle in the transplants a little.
 
-Not intended for use as-is - this model is meant to serve as a base for further tuning, hopefully with a greater capacity for learning than 13b.
+Not intended for use as-is - this model is meant to serve as a base for further tuning, hopefully with a greater capacity for learning than 13b.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 46.85 |
+| ARC (25-shot)         | 58.53 |
+| HellaSwag (10-shot)   | 82.55 |
+| MMLU (5-shot)         | 54.68 |
+| TruthfulQA (0-shot)   | 39.84 |
+| Winogrande (5-shot)   | 76.32 |
+| GSM8K (5-shot)        | 9.93  |
+| DROP (3-shot)         | 6.08  |
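For reference, the Avg. row in the added table is the plain unweighted mean of the seven benchmark scores. A minimal Python sketch verifying the arithmetic:

```python
# Reproduce the "Avg." row as the unweighted mean of the seven
# benchmark scores from the table added in this commit.
scores = {
    "ARC (25-shot)": 58.53,
    "HellaSwag (10-shot)": 82.55,
    "MMLU (5-shot)": 54.68,
    "TruthfulQA (0-shot)": 39.84,
    "Winogrande (5-shot)": 76.32,
    "GSM8K (5-shot)": 9.93,
    "DROP (3-shot)": 6.08,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints "Avg. = 46.85", matching the table
```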


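The linked details dataset can be pulled with the `datasets` library. A minimal sketch, assuming the Open LLM Leaderboard's usual per-benchmark config naming (e.g. `harness_winogrande_5`) and a `latest` split; the exact identifiers for this model are listed on the dataset card and may differ:

```python
from datasets import load_dataset

# Sketch only: the config and split names below are assumptions based on
# the leaderboard's usual layout -- check the dataset card at
# https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b
# for the exact identifiers before running.
details = load_dataset(
    "open-llm-leaderboard/details_chargoddard__llama2-22b",
    "harness_winogrande_5",  # assumed per-benchmark config name
    split="latest",          # assumed split holding the most recent run
)
print(details[0])  # one row per evaluated example, with per-example details
```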