Model Card for AWQ-OPT-6.7B
Model Details
Model Description
This model is based on Facebook's OPT-6.7B and quantized with the Activation-aware Weight Quantization (AWQ) method. The quantization layers are merged into the original model to reduce memory usage and speed up inference. Calibration used a random sample of 50 entries from the GBS8K dataset, with a sequence length of 512 and a step size of 0.02.
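This matches the standard AWQ recipe: collect per-input-channel activation statistics on the calibration set, then grid-search a per-channel scaling exponent (plausibly what the 0.02 step size above refers to) that minimizes each layer's quantization error before the scaled weights are quantized. Below is a minimal, illustrative PyTorch sketch of that search; the helper names (fake_quantize, search_awq_scales) and the simple symmetric quantizer are assumptions for illustration, not the exact code used to produce this checkpoint.

import torch

def fake_quantize(w: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    # Symmetric per-output-channel fake quantization of a weight matrix (illustrative).
    q_max = 2 ** (n_bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / q_max
    return (w / scale).round().clamp(-q_max, q_max) * scale

def search_awq_scales(w: torch.Tensor, x: torch.Tensor, step: float = 0.02):
    # w: layer weight, shape (out_features, in_features)
    # x: calibration activations, shape (tokens, in_features)
    act_mag = x.abs().mean(dim=0)            # per-input-channel activation magnitude
    ref_out = x @ w.t()                      # full-precision reference output
    best_err, best_scales = float("inf"), torch.ones_like(act_mag)
    for alpha in torch.arange(0.0, 1.0 + step, step):
        scales = act_mag.pow(alpha).clamp(min=1e-8)
        scales = scales / (scales.max() * scales.min()).sqrt()  # normalize the scales
        w_q = fake_quantize(w * scales)      # scale weights up, then quantize
        out = (x / scales) @ w_q.t()         # scale activations down to compensate
        err = (out - ref_out).pow(2).mean().item()
        if err < best_err:
            best_err, best_scales = err, scales
    return best_scales, best_err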
Developed by
jiangchengchengNLP
Funded by [optional]
Not applicable; this model builds on the AWQ method and Meta's OPT-6.7B.
Shared by [optional]
No one
Model type
Transformer-based language model
Language(s) (NLP)
Primarily English, inherited from the base OPT-6.7B model.
License
Apache License 2.0
Finetuned from model [optional]
OPT-6.7B
Model Sources [optional]
- Repository: https://huggingface.co/facebook/opt-6.7b
How to Get Started with the Model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiangchengchengNLP/opt-6.7B-AWQ-8INT"
tokenizer_id = "facebook/opt-6.7b"

# The quantized checkpoint reuses the original OPT-6.7B tokenizer.
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id).cuda()

input_text = "Hello, I'm am conscious and"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.cuda()

generated_ids = model.generate(input_ids)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
# ["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
Calibration Data
The quantization scales were calibrated (not fine-tuned) on a random sample of 50 entries from the GBS8K dataset, using a sequence length of 512.
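For reference, a calibration batch of this shape could be assembled with the Hugging Face datasets library roughly as follows. The dataset path and the "text" field below are placeholders, since the exact loading code for the GBS8K sample is not published with this card.

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

# Placeholder dataset path and text field; substitute the actual calibration set.
dataset = load_dataset("path/to/calibration-dataset", split="train")
samples = dataset.shuffle(seed=0).select(range(50))   # 50 random entries

calib_batch = tokenizer(
    [s["text"] for s in samples],
    max_length=512,        # matches the calibration sequence length above
    truncation=True,
    padding="max_length",
    return_tensors="pt",
)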
Hardware Type
Cloud-based GPUs (e.g., NVIDIA V100 or A100)
Software
PyTorch, Transformers library from Hugging Face