Model Description: This model is an ALBERT model fine-tuned on MRPC and dynamically quantized with huggingface/optimum-intel using Intel® Neural Compressor.
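For reference, a dynamic post-training quantization flow of this kind can be run with optimum-intel's `INCQuantizer`. The sketch below is illustrative, not the exact recipe used to produce this checkpoint; in particular, the FP32 starting checkpoint name is a placeholder for any ALBERT model fine-tuned on MRPC:

```python
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

# Placeholder: any ALBERT checkpoint fine-tuned on MRPC
model = AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-MRPC")

# Dynamic quantization needs no calibration dataset: weights are
# quantized ahead of time, activations on the fly at inference
quantization_config = PostTrainingQuantConfig(approach="dynamic")

quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=quantization_config,
    save_directory="albert-base-v2-mrpc-int8",
)
```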
To load the quantized model, you can use the following:
```python
from optimum.intel import INCModelForSequenceClassification

model = INCModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-MRPC-int8")
```
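As a usage sketch, assuming the repository bundles the original ALBERT tokenizer and follows the usual GLUE MRPC label convention (neither is stated above), inference on a sentence pair looks like this:

```python
from optimum.intel import INCModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "Intel/albert-base-v2-MRPC-int8"
model = INCModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# MRPC is a sentence-pair paraphrase task: encode both sentences together
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)

outputs = model(**inputs)
predicted = outputs.logits.argmax(dim=-1).item()
# Label names come from the checkpoint's config; GLUE MRPC conventionally
# uses 1 for "equivalent" (paraphrase) and 0 for "not equivalent"
print(model.config.id2label[predicted])
```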
| | INT8 | FP32 |
|---|---|---|
| Accuracy (eval-f1) | 0.9193 | 0.9263 | 
| Model size (MB) | 45.0 | 46.7 |