| chunk_id (string, 44-45 chars) | chunk_content (string, 21-448 chars) | filename (string, 36 chars) |
|---|---|---|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_20
|
ct_embs, axis=1)
def get_loss(cosine_score, labels):
return torch.mean(torch.square(labels * (1 - cosine_score) + torch.clamp((1 - labels) * cosine_score, min=0.0)))
The get_cosine_embeddings function computes the cosine similarity and the get_loss function computes the loss. The loss enables the model to learn that a cosine score of 1 for query and product pairs is relevant, and a cosine score of 0 or below is irrelevant.
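For reference, a minimal sketch of what get_cosine_embeddings could look like, assuming the query and product embeddings are already L2-normalized (the implementation shown here is an assumption, not the notebook's exact code):
Copied
import torch

def get_cosine_embeddings(query_embs, product_embs):
    # For L2-normalized embeddings, the dot product equals the cosine similarity.
    return torch.sum(query_embs * product_embs, axis=1)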
Define the Peft
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_21
|
hat a cosine score of 1 for query and product pairs is relevant, and a cosine score of 0 or below is irrelevant.
Define the PeftConfig with your LoRA hyperparameters, and create a PeftModel. We use 🤗 Accelerate for handling all device management, mixed precision training, gradient accumulation, WandB tracking, and saving/loading utilities.
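A rough sketch of that setup (the hyperparameters and target_modules below are assumptions for an e5-style encoder, not necessarily the values used in the notebook):
Copied
from peft import LoraConfig, TaskType, get_peft_model

# Hypothetical LoRA hyperparameters; the actual script may differ.
peft_config = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "key", "value"],  # assumed attention projection names
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()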
Results
The table below compares the training time, the batch size that could be fit in Colab, and the
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_22
|
ng/loading utilities.
Results
The table below compares the training time, the batch size that could be fit in Colab, and the best ROC-AUC scores between a PEFT model and a fully fine-tuned model:
| Training Type | Training time per epoch (Hrs) | Batch Size that fits | ROC-AUC score (higher is better) |
|---|---|---|---|
| Pre-Trained e5-large-v2 | - | - | 0.68 |
| PEFT | 1.73 | 64 | 0.787 |
| Full Fine-Tuning | 2.33 | 32 | 0.7969 |
The PEFT-LoRA model trains 1.35X faster and can fit 2X batch size c
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_23
|
-
-
0.68
PEFT
1.73
64
0.787
Full Fine-Tuning
2.33
32
0.7969
The PEFT-LoRA model trains 1.35X faster and can fit 2X batch size compared to the fully fine-tuned model, and the performance of PEFT-LoRA is comparable to the fully fine-tuned model with a relative drop of -1.24% in ROC-AUC. This gap can probably be closed with bigger models as mentioned in The Power of Scale for Parameter-Efficient Prompt Tuning.
Inference
Let's go! Now we have
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_24
|
ith bigger models as mentioned in The Power of Scale for Parameter-Efficient Prompt Tuning.
Inference
Let's go! Now that we have the model, we need to create a search index of all the products in our catalog.
Please refer to peft_lora_embedding_semantic_similarity_inference.ipynb for the complete inference code.
Get a mapping of ids to products, which we can call ids_to_products_dict:
Copied
{0: 'RamPro 10" All Purpose Utility Air Tires/Wheels w
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_25
|
list of ids to products which we can call ids_to_products_dict:
Copied
{0: 'RamPro 10" All Purpose Utility Air Tires/Wheels with a 5/8" Diameter Hole with Double Sealed Bearings (Pack of 2)',
1: 'MaxAuto 2-Pack 13x5.00-6 2PLY Turf Mower Tractor Tire with Yellow Rim, (3" Centered Hub, 3/4" Bushings )',
2: 'NEIKO 20601A 14.5 inch Steel Tire Spoon Lever Iron Tool Kit | Professional Tire Changing Tool for Motorcycle, Dirt Bike, Lawn Mower | 3
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_26
|
601A 14.5 inch Steel Tire Spoon Lever Iron Tool Kit | Professional Tire Changing Tool for Motorcycle, Dirt Bike, Lawn Mower | 3 pcs Tire Spoons | 3 Rim Protector | Valve Tool | 6 Valve Cores',
3: '2PK 13x5.00-6 13x5.00x6 13x5x6 13x5-6 2PLY Turf Mower Tractor Tire with Gray Rim',
4: '(Set of 2) 15x6.00-6 Husqvarna/Poulan Tire Wheel Assy .75" Bearing',
5: 'MaxAuto 2 Pcs 16x6.50-8 Lawn Mower Tire for Garden Tractors Ridings, 4PR, Tubeless',
6:
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_27
|
lan Tire Wheel Assy .75" Bearing',
5: 'MaxAuto 2 Pcs 16x6.50-8 Lawn Mower Tire for Garden Tractors Ridings, 4PR, Tubeless',
6: 'Dr.Roc Tire Spoon Lever Dirt Bike Lawn Mower Motorcycle Tire Changing Tools with Durable Bag 3 Tire Irons 2 Rim Protectors 1 Valve Stems Set TR412 TR413',
7: 'MARASTAR 21446-2PK 15x6.00-6" Front Tire Assembly Replacement-Craftsman Mower, Pack of 2',
8: '15x6.00-6" Front Tire Assembly Replacement for 100 and 300 Ser
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_28
|
Front Tire Assembly Replacement-Craftsman Mower, Pack of 2',
8: '15x6.00-6" Front Tire Assembly Replacement for 100 and 300 Series John Deere Riding Mowers - 2 pack',
9: 'Honda HRR Wheel Kit (2 Front 44710-VL0-L02ZB, 2 Back 42710-VE2-M02ZE)',
10: 'Honda 42710-VE2-M02ZE (Replaces 42710-VE2-M01ZE) Lawn Mower Rear Wheel Set of 2' ...
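For reference, a mapping like this could be built from the product catalog along these lines (a sketch; the product_title column name is an assumption about the dataset schema):
Copied
ids_to_products_dict = {i: row["product_title"] for i, row in enumerate(dataset)}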
Use the trained smangrul/peft_lora_e5_ecommerce_semantic_search_colab model to get the product embeddings:
Co
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_29
|
l Set of 2' ...
Use the trained smangrul/peft_lora_e5_ecommerce_semantic_search_colab model to get the product embeddings:
Copied
# base model
model = AutoModelForSentenceEmbedding(model_name_or_path, tokenizer)
# peft config and wrapping
model = PeftModel.from_pretrained(model, peft_model_id)
device = "cuda"
model.to(device)
model.eval()
model = model.merge_and_unload()
import numpy as np
num_products= len(dataset)
d = 1024
product_embe
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_30
|
l.to(device)
model.eval()
model = model.merge_and_unload()
import numpy as np
num_products= len(dataset)
d = 1024
product_embeddings_array = np.zeros((num_products, d))
for step, batch in enumerate(tqdm(dataloader)):
    with torch.no_grad():
        with torch.amp.autocast(dtype=torch.bfloat16, device_type="cuda"):
            product_embs = model(**{k: v.to(device) for k, v in batch.items()}).detach().float().cpu()
    start_index = step*bat
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_31
|
            product_embs = model(**{k: v.to(device) for k, v in batch.items()}).detach().float().cpu()
    start_index = step*batch_size
    end_index = start_index+batch_size if (start_index+batch_size) < num_products else num_products
    product_embeddings_array[start_index:end_index] = product_embs
    del product_embs, batch
Create a search index using HNSWlib:
Copied
def construct_search_index(dim, num_elements, data):
# Declaring
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_32
|
embs, batch
Create a search index using HNSWlib:
Copied
def construct_search_index(dim, num_elements, data):
    # Declaring index
    search_index = hnswlib.Index(space = 'ip', dim = dim) # possible options are l2, cosine or ip
    # Initializing index - the maximum number of elements should be known beforehand
    search_index.init_index(max_elements = num_elements, ef_construction = 200, M = 100)
    # Element insertion (can be call
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_33
|
d
    search_index.init_index(max_elements = num_elements, ef_construction = 200, M = 100)
    # Element insertion (can be called several times):
    ids = np.arange(num_elements)
    search_index.add_items(data, ids)
    return search_index
product_search_index = construct_search_index(d, num_products, product_embeddings_array)
Get the query embeddings and nearest neighbors:
Copied
def get_query_embeddings(query, model, tokenizer, device
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_34
|
ddings_array)
Get the query embeddings and nearest neighbors:
Copied
def get_query_embeddings(query, model, tokenizer, device):
    inputs = tokenizer(query, padding="max_length", max_length=70, truncation=True, return_tensors="pt")
    model.eval()
    with torch.no_grad():
        query_embs = model(**{k: v.to(device) for k, v in inputs.items()}).detach().cpu()
    return query_embs[0]
def get_nearest_neighbours(k, search_index, query
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_35
|
ce) for k, v in inputs.items()}).detach().cpu()
return query_embs[0]
def get_nearest_neighbours(k, search_index, query_embeddings, ids_to_products_dict, threshold=0.7):
    # Controlling the recall by setting ef:
    search_index.set_ef(100) # ef should always be > k
    # Query dataset, k - number of the closest elements (returns 2 numpy arrays)
    labels, distances = search_index.knn_query(query_embeddings, k = k)
    return
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_36
|
osest elements (returns 2 numpy arrays)
    labels, distances = search_index.knn_query(query_embeddings, k = k)
    return [(ids_to_products_dict[label], (1-distance)) for label, distance in zip(labels[0], distances[0]) if (1-distance)>=threshold]
Let's test it out with the query "deep learning books":
Copied
query = "deep learning books"
k = 10
query_embeddings = get_query_embeddings(query, model, tokenizer, device)
search_results = get_
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_37
|
ry = "deep learning books"
k = 10
query_embeddings = get_query_embeddings(query, model, tokenizer, device)
search_results = get_nearest_neighbours(k, product_search_index, query_embeddings, ids_to_products_dict, threshold=0.7)
print(f"{query=}")
for product, cosine_sim_score in search_results:
    print(f"cosine_sim_score={round(cosine_sim_score,2)} {product=}")
Output:
Copied
query='deep learning books'
cosine_sim_score=0.95 product='Deep
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_38
|
score={round(cosine_sim_score,2)} {product=}")
Output:
Copied
query='deep learning books'
cosine_sim_score=0.95 product='Deep Learning (The MIT Press Essential Knowledge series)'
cosine_sim_score=0.93 product='Practical Deep Learning: A Python-Based Introduction'
cosine_sim_score=0.9 product='Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems'
cosine_sim_score=0.9 product=
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_39
|
ng with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems'
cosine_sim_score=0.9 product='Machine Learning: A Hands-On, Project-Based Introduction to Machine Learning for Absolute Beginners: Mastering Engineering ML Systems using Scikit-Learn and TensorFlow'
cosine_sim_score=0.9 product='Mastering Machine Learning on AWS: Advanced machine learning in Python using SageMaker, Apache Spark, and TensorFlow'
co
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_40
|
roduct='Mastering Machine Learning on AWS: Advanced machine learning in Python using SageMaker, Apache Spark, and TensorFlow'
cosine_sim_score=0.9 product='The Hundred-Page Machine Learning Book'
cosine_sim_score=0.89 product='Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems'
cosine_sim_score=0.89 product='Machine Learning: A Journey from Beginner to Advanced Includ
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_41
|
niques to Build Intelligent Systems'
cosine_sim_score=0.89 product='Machine Learning: A Journey from Beginner to Advanced Including Deep Learning, Scikit-learn and Tensorflow'
cosine_sim_score=0.88 product='Mastering Machine Learning with scikit-learn'
cosine_sim_score=0.88 product='Mastering Machine Learning with scikit-learn - Second Edition: Apply effective learning algorithms to real-world problems using scikit-learn'
Books on deep learning
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_42
|
it-learn - Second Edition: Apply effective learning algorithms to real-world problems using scikit-learn'
Books on deep learning and machine learning are retrieved even though machine learning wasn't included in the query. This means the model has learned that these books are semantically relevant to the query based on the purchase behavior of customers on Amazon.
The next steps would ideally involve using ONNX/TensorRT to optimize the model a
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
5c1fa8dadc86e3a282900d1df405f9cb.txt_chunk_43
|
the purchase behavior of customers on Amazon.
The next steps would ideally involve using ONNX/TensorRT to optimize the model and using a Triton server to host it. Check out 🤗 Optimum for related optimizations for efficient serving!
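As one possible starting point, the merged model could be exported to ONNX with 🤗 Optimum roughly as follows (a sketch; the exact class and arguments depend on your Optimum version):
Copied
# Requires `pip install optimum[onnxruntime]`; API details may vary across versions.
from optimum.onnxruntime import ORTModelForFeatureExtraction

ort_model = ORTModelForFeatureExtraction.from_pretrained(
    "intfloat/e5-large-v2",  # or a local directory containing the merged fine-tuned model
    export=True,
)
ort_model.save_pretrained("e5-large-v2-onnx")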
|
5c1fa8dadc86e3a282900d1df405f9cb.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_1
|
PEFT
🤗 PEFT, or Parameter-Efficient Fine-Tuning, is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters.
PEFT methods only fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs because fine-tuning large-scale PLMs is prohibitively costly.
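To make this concrete, a minimal LoRA setup with 🤗 PEFT looks roughly like this (the checkpoint and hyperparameters are illustrative, not prescribed by this page):
Copied
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32, lora_dropout=0.1)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only a small fraction of the parameters are trainable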
Recent state-of-the-art PEFT techniques ach
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_2
|
onal and storage costs because fine-tuning large-scale PLMs is prohibitively costly.
Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
PEFT is seamlessly integrated with 🤗 Accelerate for large-scale models leveraging DeepSpeed and Big Model Inference.
Get started
Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.
How
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_3
|
here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.
How-to guides
Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.
Conceptual guides
Get a better theo
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_4
|
nd more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.
Conceptual guides
Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.
Reference
Technical descriptions of how 🤗 PEFT classes and methods work.
Supported methods
LoRA: LoRA: Low-Rank Adaptation of Large Language Models
Prefix Tuning: Prefi
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_5
|
EFT classes and methods work.
Supported methods
LoRA: LoRA: Low-Rank Adaptation of Large Language Models
Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation, P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
P-Tuning: GPT Understands, Too
Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_6
|
ning: The Power of Scale for Parameter-Efficient Prompt Tuning
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
IA3: Infused Adapter by Inhibiting and Amplifying Inner Activations
Supported models
The tables provided below list the PEFT methods and models supported for each task. To apply a particular PEFT method for
a task, please refer t
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_7
|
ded below list the PEFT methods and models supported for each task. To apply a particular PEFT method for
a task, please refer to the corresponding Task guides.
Causal Language Modeling
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bloom | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-J | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-NeoX-20B | ✅ | ✅ | ✅ | ✅ | ✅ |
| LLaMA | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
Conditional Generation
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tun
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_8
|
GPT-NeoX-20B
✅
✅
✅
✅
✅
LLaMA
✅
✅
✅
✅
✅
ChatGLM
✅
✅
✅
✅
✅
Conditional Generation
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
Sequence Classification
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
BERT
✅
✅
✅
✅
✅
RoBERTa
✅
✅
✅
✅
✅
GPT-2
✅
✅
✅
✅
Bloom
✅
✅
✅
✅
OPT
✅
✅
✅
✅
GPT-Neo
✅
✅
✅
✅
GPT-J
✅
✅
✅
✅
Deberta
✅
✅
✅
Deberta-v2
✅
✅
✅
Token Classification
Model
LoRA
Prefix Tuning
P-Tuning
Prom
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_9
|
PT-Neo
✅
✅
✅
✅
GPT-J
✅
✅
✅
✅
Deberta
✅
✅
✅
Deberta-v2
✅
✅
✅
Token Classification
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
BERT
✅
✅
RoBERTa
✅
✅
GPT-2
✅
✅
Bloom
✅
✅
OPT
✅
✅
GPT-Neo
✅
✅
GPT-J
✅
✅
Deberta
✅
Deberta-v2
✅
Text-to-Image Generation
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
Stable Diffusion
✅
Image Classification
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
ViT
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_10
|
Tuning
Prompt Tuning
IA3
Stable Diffusion
✅
Image Classification
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
ViT
✅
Swin
✅
Image to text (Multi-modal models)
We have tested LoRA for ViT and Swin for fine-tuning on image classification.
However, it should be possible to use LoRA for any ViT-based model from 🤗 Transformers.
Check out the Image classification task guide to learn more. If you run into problems, please op
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_11
|
ased model from 🤗 Transformers.
Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
Blip-2
✅
Semantic Segmentation
As with image-to-text models, you should be able to apply LoRA to any of the segmentation models.
It's worth noting that we haven't tested this with every architecture yet. Therefore, if you come across any issues, ki
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
d64ef94a794ccb915cc97112ec270ab9.txt_chunk_12
|
models.
It's worth noting that we haven't tested this with every architecture yet. Therefore, if you come across any issues, kindly create an issue report.
Model
LoRA
Prefix Tuning
P-Tuning
Prompt Tuning
IA3
SegFormer
✅
|
d64ef94a794ccb915cc97112ec270ab9.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_1
|
DeepSpeed
DeepSpeed is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_2
|
duces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization.
Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT. This guide will help you learn how to use our DeepSpeed training script. You'll configure the script to train a large model for conditional gen
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_3
|
help you learn how to use our DeepSpeed training script. You'll configure the script to train a large model for conditional generation with ZeRO-3 and ZeRO-Offload.
💡 To help you get started, check out our example training scripts for causal language modeling and conditional generation. You can adapt these scripts for your own applications or even use them out of the box if your task is similar to the one in the scripts.
Configuration
Start
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_4
|
your own applications or even use them out of the box if your task is similar to the one in the scripts.
Configuration
Start by running the following command to create a DeepSpeed configuration file with 🤗 Accelerate. The --config_file flag allows you to save the configuration file to a specific location, otherwise it is saved as a default_config.yaml file in the 🤗 Accelerate cache.
The configuration file is used to set the default options
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_5
|
it is saved as a default_config.yaml file in the 🤗 Accelerate cache.
The configuration file is used to set the default options when you launch the training script.
Copied
accelerate config --config_file ds_zero3_cpu.yaml
You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3 and ZeRO-Offload so make sure you pick those options.
Copied
`zero_stage`: [0] Disabled, [1] opt
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_6
|
s example, you'll use ZeRO-3 and ZeRO-Offload so make sure you pick those options.
Copied
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
`gradient_clipping`: Enable gradient clipping with value.
`offload_optimizer_device`:
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_7
|
dients before averaging and applying them.
`gradient_clipping`: Enable gradient clipping with value.
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
`zero3
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_8
|
er offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16`
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_9
|
weights when using ZeRO Stage-3.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
An example configuration file might look like the following. The most important thing to notice is that zero_stage is set to 3, and offload_optimizer_device and offload_param_device are set to the cpu.
Copied
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumula
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_10
|
and offload_param_device are set to the cpu.
Copied
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config:
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_11
|
SPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
use_cpu: false
The important parts
Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
Within the main function, the script creates an Accelerator class to initialize
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_12
|
e what's going on, and understand how it works.
Within the main function, the script creates an Accelerator class to initialize all the necessary requirements for distributed training.
💡 Feel free to change the model and dataset inside the main function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.
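For example, a preprocessing function for a text-to-text dataset could look like the sketch below; text_column, label_column, and the sequence lengths are placeholders you would adapt to your own data:
Copied
def preprocess_function(examples):
    # Tokenize inputs and targets; column names and lengths here are placeholders.
    model_inputs = tokenizer(examples[text_column], max_length=64, padding="max_length", truncation=True)
    targets = tokenizer(examples[label_column], max_length=8, padding="max_length", truncation=True)
    labels = targets["input_ids"]
    # Replace padding token ids with -100 so they are ignored by the loss.
    labels = [[(t if t != tokenizer.pad_token_id else -100) for t in label] for label in labels]
    model_inputs["labels"] = labels
    return model_inputs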
The script also creates a configuration for the 🤗 PEFT method you're
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_13
|
ou may also need to write your own preprocessing function.
The script also creates a configuration for the 🤗 PEFT method you're using, which in this case is LoRA. The LoraConfig specifies the task type and important parameters such as the dimension of the low-rank matrices, the scaling factor for the matrices, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace LoraConfig with the approp
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_14
|
t probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace LoraConfig with the appropriate class.
Copied
def main():
+   accelerator = Accelerator()
    model_name_or_path = "facebook/bart-large"
    dataset_name = "twitter_complaints"
+   peft_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
    )
Throughout the script,
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_15
|
task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
)
Throughout the script, you'll see the main_process_first and wait_for_everyone functions which help control and synchronize when processes are executed.
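These are standard 🤗 Accelerate utilities; on an Accelerator instance they are used roughly like this (a generic sketch, not an excerpt from the script):
Copied
# Run dataset preprocessing on the main process first so the other processes can reuse the cache.
with accelerator.main_process_first():
    processed_datasets = dataset.map(preprocess_function, batched=True)

# Block until every process reaches this point before continuing.
accelerator.wait_for_everyone()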
The get_peft_model() function takes a base model and the peft_config you prepared earlier to create a PeftModel:
Copied
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_16
|
config you prepared earlier to create a PeftModel:
Copied
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+ model = get_peft_model(model, peft_config)
Pass all the relevant training objects to 🤗 Accelerate's prepare which makes sure everything is ready for training:
Copied
model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
    model, train_dataloader, eval_dataload
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_17
|
der, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
    model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler
)
The next bit of code checks whether the DeepSpeed plugin is used in the Accelerator, and if the plugin exists, then the Accelerator uses ZeRO-3 as specified in the configuration file:
Copied
is_ds_zero_3 = False
if getattr(accelerator.state, "deepspeed_plugin", None):
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_18
|
s specified in the configuration file:
Copied
is_ds_zero_3 = False
if getattr(accelerator.state, "deepspeed_plugin", None):
    is_ds_zero_3 = accelerator.state.deepspeed_plugin.zero_stage == 3
Inside the training loop, the usual loss.backward() is replaced by 🤗 Accelerate's backward which uses the correct backward() method based on your configuration:
Copied
for epoch in range(num_epochs):
    with TorchTracemalloc() as tracemalloc:
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_19
|
) method based on your configuration:
Copied
for epoch in range(num_epochs):
    with TorchTracemalloc() as tracemalloc:
        model.train()
        total_loss = 0
        for step, batch in enumerate(tqdm(train_dataloader)):
            outputs = model(**batch)
            loss = outputs.loss
            total_loss += loss.detach().float()
+           accelerator.backward(loss)
            optimizer.step()
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_20
|
            total_loss += loss.detach().float()
+           accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
That is all! The rest of the script handles the training loop, evaluation, and even pushes the trained model to the Hub for you.
Train
Run the following command to launch the training script. Earlier, you saved the configuration file to ds_zero3_cpu.yaml, so you'll need to
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_21
|
lowing command to launch the training script. Earlier, you saved the configuration file to ds_zero3_cpu.yaml, so you'll need to pass the path to the launcher with the --config_file argument like this:
Copied
accelerate launch --config_file ds_zero3_cpu.yaml examples/peft_lora_seq2seq_accelerate_ds_zero3_offload.py
You'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_22
|
ou'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and compares the predictions to the labels:
Copied
GPU Memory before entering the train : 1916
GPU Memory consumed at the end of the train (end-begin): 66
GPU Peak Memory consumed during the train (max-begin): 7488
GPU Total Peak Memory consumed during the train (max): 9404
CPU Memory before entering the train : 19411
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_23
|
rain (max-begin): 7488
GPU Total Peak Memory consumed during the train (max): 9404
CPU Memory before entering the train : 19411
CPU Memory consumed at the end of the train (end-begin): 0
CPU Peak Memory consumed during the train (max-begin): 0
CPU Total Peak Memory consumed during the train (max): 19411
epoch=4: train_ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
100%|██████████████████████████████████████
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_24
|
ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
100%|████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00, 3.92s/it]
GPU Memory before entering the eval : 1982
GPU Memory consumed at the end of the eval (end-begin): -66
GPU Peak Memory consumed during the eval (max-begin): 672
GPU Total Peak Memory consumed during the eval (max): 2654
CPU Memory befo
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_25
|
Peak Memory consumed during the eval (max-begin): 672
GPU Total Peak Memory consumed during the eval (max): 2654
CPU Memory before entering the eval : 19411
CPU Memory consumed at the end of the eval (end-begin): 0
CPU Peak Memory consumed during the eval (max-begin): 0
CPU Total Peak Memory consumed during the eval (max): 19411
accuracy=100.0
eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complai
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_26
|
ax): 19411
accuracy=100.0
eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
dataset['train'][label_column][:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b6e9b6939c8a71016ca6ac298940a05e.txt_chunk_27
|
o complaint', 'complaint', 'complaint', 'no complaint']
|
b6e9b6939c8a71016ca6ac298940a05e.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_1
|
Image classification using LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to fine-tune an image classification model.
By using LoRA from 🤗 PEFT, we can reduce the number of trainable parameters in the model to only 0.77% of the original.
LoRA achieves this reduction by adding low-rank "update matrices" to specific blocks of the model, such as the attention
blocks. During fine-tuning, only these matrices are
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_2
|
nk "update matrices" to specific blocks of the model, such as the attention
blocks. During fine-tuning, only these matrices are trained, while the original model parameters are left unchanged.
At inference time, the update matrices are merged with the original model parameters to produce the final classification result.
For more information on LoRA, please refer to the original LoRA paper.
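For a concrete picture of that merge step, a trained LoRA adapter can be folded back into the base weights with 🤗 PEFT roughly like this (a sketch; the adapter path is a placeholder):
Copied
from transformers import AutoModelForImageClassification
from peft import PeftModel

base_model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224-in21k")
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")  # placeholder adapter location
model = model.merge_and_unload()  # merges the LoRA update matrices into the base weights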
Install dependencies
Install the libraries required
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_3
|
.
For more information on LoRA, please refer to the original LoRA paper.
Install dependencies
Install the libraries required for model training:
Copied
!pip install transformers accelerate evaluate datasets peft -q
Check the versions of all required libraries to make sure you are up to date:
Copied
import transformers
import accelerate
import peft
print(f"Transformers version: {transformers.__version__}")
print(f"Accelerate version: {
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_4
|
sformers
import accelerate
import peft
print(f"Transformers version: {transformers.__version__}")
print(f"Accelerate version: {accelerate.__version__}")
print(f"PEFT version: {peft.__version__}")
"Transformers version: 4.27.4"
"Accelerate version: 0.18.0"
"PEFT version: 0.2.0"
Authenticate to share your model
To share the fine-tuned model at the end of the training with the community, authenticate using your 🤗 token.
You can obtain your tok
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_5
|
are the fine-tuned model at the end of the training with the community, authenticate using your 🤗 token.
You can obtain your token from your account settings.
Copied
from huggingface_hub import notebook_login
notebook_login()
Select a model checkpoint to fine-tune
Choose a model checkpoint from any of the model architectures supported for image classification. When in doubt, refer to
the image classification task guide in
🤗 Transformers
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_6
|
architectures supported for image classification. When in doubt, refer to
the image classification task guide in
🤗 Transformers documentation.
Copied
model_checkpoint = "google/vit-base-patch16-224-in21k"
Load a dataset
To keep this example's runtime short, let's only load the first 5000 instances from the training set of the Food-101 dataset:
Copied
from datasets import load_dataset
dataset = load_dataset("food101", split="train[:500
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_7
|
ng set of the Food-101 dataset:
Copied
from datasets import load_dataset
dataset = load_dataset("food101", split="train[:5000]")
Dataset preparation
To prepare the dataset for training and evaluation, create label2id and id2label dictionaries. These will come in
handy when performing inference and for metadata information:
Copied
labels = dataset.features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(lab
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_8
|
nformation:
Copied
labels = dataset.features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = i
id2label[i] = label
id2label[2]
"baklava"
Next, load the image processor of the model you're fine-tuning:
Copied
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained(model_checkpoint)
The image_processor contains useful information on wh
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_9
|
or
image_processor = AutoImageProcessor.from_pretrained(model_checkpoint)
The image_processor contains useful information on which size the training and evaluation images should be resized
to, as well as values that should be used to normalize the pixel values. Using the image_processor, prepare transformation
functions for the datasets. These functions will include data augmentation and pixel scaling:
Copied
from torchvision.transforms imp
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_10
|
ns for the datasets. These functions will include data augmentation and pixel scaling:
Copied
from torchvision.transforms import (
    CenterCrop,
    Compose,
    Normalize,
    RandomHorizontalFlip,
    RandomResizedCrop,
    Resize,
    ToTensor,
)
normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
train_transforms = Compose(
    [
        RandomResizedCrop(image_processor.size["height"]),
        Rando
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_11
|
_processor.image_std)
train_transforms = Compose(
    [
        RandomResizedCrop(image_processor.size["height"]),
        RandomHorizontalFlip(),
        ToTensor(),
        normalize,
    ]
)
val_transforms = Compose(
    [
        Resize(image_processor.size["height"]),
        CenterCrop(image_processor.size["height"]),
        ToTensor(),
        normalize,
    ]
)
def preprocess_train(example_batch):
    """Apply train_transforms acros
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_12
|
ht"]),
        ToTensor(),
        normalize,
    ]
)
def preprocess_train(example_batch):
    """Apply train_transforms across a batch."""
    example_batch["pixel_values"] = [train_transforms(image.convert("RGB")) for image in example_batch["image"]]
    return example_batch
def preprocess_val(example_batch):
    """Apply val_transforms across a batch."""
    example_batch["pixel_values"] = [val_transforms(image.convert("RGB")) for image
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_13
|
    """Apply val_transforms across a batch."""
    example_batch["pixel_values"] = [val_transforms(image.convert("RGB")) for image in example_batch["image"]]
    return example_batch
Split the dataset into training and validation sets:
Copied
splits = dataset.train_test_split(test_size=0.1)
train_ds = splits["train"]
val_ds = splits["test"]
Finally, set the transformation functions for the datasets accordingly:
Copied
train_ds.set_transform(
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_14
|
al_ds = splits["test"]
Finally, set the transformation functions for the datasets accordingly:
Copied
train_ds.set_transform(preprocess_train)
val_ds.set_transform(preprocess_val)
Load and prepare a model
Before loading the model, let's define a helper function to check the total number of parameters a model has, as well
as how many of them are trainable.
Copied
def print_trainable_parameters(model):
    trainable_params = 0
    all_pa
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_15
|
as well
as how many of them are trainable.
Copied
def print_trainable_parameters(model):
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
    )
It's
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_16
|
nable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
)
It's important to initialize the original model correctly as it will be used as a base to create the PeftModel you'll
actually fine-tune. Specify the label2id and id2label so that AutoModelForImageClassification can append a classification
head to the underlying model, adapted for this dataset. You should see the following
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_17
|
Classification can append a classification
head to the underlying model, adapted for this dataset. You should see the following output:
Copied
Some weights of ViTForImageClassification were not initialized from the model checkpoint at google/vit-base-patch16-224-in21k and are newly initialized: ['classifier.weight', 'classifier.bias']
Copied
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
model = Auto
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_18
|
'classifier.bias']
Copied
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
model = AutoModelForImageClassification.from_pretrained(
    model_checkpoint,
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,  # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
Before creating a PeftModel, you can check the number of trainable parameters in the
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_19
|
ne-tune an already fine-tuned checkpoint
)
Before creating a PeftModel, you can check the number of trainable parameters in the original model:
Copied
print_trainable_parameters(model)
"trainable params: 85876325 || all params: 85876325 || trainable%: 100.00"
Next, use get_peft_model to wrap the base model so that "update" matrices are added to the respective places.
Copied
from peft import LoraConfig, get_peft_model
config = LoraConfig(
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_20
|
update" matrices are added to the respective places.
Copied
from peft import LoraConfig, get_peft_model
config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],
    lora_dropout=0.1,
    bias="none",
    modules_to_save=["classifier"],
)
lora_model = get_peft_model(model, config)
print_trainable_parameters(lora_model)
"trainable params: 667493 || all params: 86466149 || trainable%: 0.77"
Let's unpack what's g
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_21
|
nt_trainable_parameters(lora_model)
"trainable params: 667493 || all params: 86466149 || trainable%: 0.77"
Let's unpack what's going on here.
To use LoRA, you need to specify the target modules in LoraConfig so that get_peft_model() knows which modules
inside our model need to be amended with LoRA matrices. In this example, we're only interested in targeting the query and
value matrices of the attention blocks of the base model. Since the param
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_22
|
mple, we're only interested in targeting the query and
value matrices of the attention blocks of the base model. Since the parameters corresponding to these matrices are "named"
"query" and "value" respectively, we specify them accordingly in the target_modules argument of LoraConfig.
We also specify modules_to_save. After wrapping the base model with get_peft_model() along with the config, we get
a new model where only the LoRA parameters are
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_23
|
fter wrapping the base model with get_peft_model() along with the config, we get
a new model where only the LoRA parameters are trainable (so-called "update matrices") while the pre-trained parameters
are kept frozen. However, we want the classifier parameters to be trained too when fine-tuning the base model on our
custom dataset. To ensure that the classifier parameters are also trained, we specify modules_to_save. This also
ensures that thes
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_24
|
stom dataset. To ensure that the classifier parameters are also trained, we specify modules_to_save. This also
ensures that these modules are serialized alongside the LoRA trainable parameters when using utilities like save_pretrained()
and push_to_hub().
Here's what the other parameters mean:
r: The dimension used by the LoRA update matrices.
alpha: Scaling factor.
bias: Specifies if the bias parameters should be trained. None denotes none of
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_25
|
the LoRA update matrices.
alpha: Scaling factor.
bias: Specifies if the bias parameters should be trained. None denotes none of the bias parameters will be trained.
r and alpha together control the total number of final trainable parameters when using LoRA, giving you the flexibility
to balance a trade-off between end performance and compute efficiency.
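For example, you can see the effect of r directly by wrapping a freshly loaded base model with different ranks and comparing the counts (a quick illustration reusing the helpers defined above; the base model is reloaded each time because wrapping modifies it in place):
Copied
for rank in (4, 8, 16):
    base = AutoModelForImageClassification.from_pretrained(
        model_checkpoint, label2id=label2id, id2label=id2label, ignore_mismatched_sizes=True
    )
    cfg = LoraConfig(r=rank, lora_alpha=16, target_modules=["query", "value"], modules_to_save=["classifier"])
    print(f"r={rank}:")
    print_trainable_parameters(get_peft_model(base, cfg))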
By looking at the number of trainable parameters, you can see how many parameters we're actu
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_26
|
performance and compute efficiency.
By looking at the number of trainable parameters, you can see how many parameters we're actually training. Since the goal is
to achieve parameter-efficient fine-tuning, you should expect to see fewer trainable parameters in the lora_model
in comparison to the original model, which is indeed the case here.
Define training arguments
For model fine-tuning, use Trainer. It accepts
several arguments which you c
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_27
|
indeed the case here.
Define training arguments
For model fine-tuning, use Trainer. It accepts
several arguments which you can wrap using TrainingArguments.
Copied
from transformers import TrainingArguments, Trainer
model_name = model_checkpoint.split("/")[-1]
batch_size = 128
args = TrainingArguments(
    f"{model_name}-finetuned-lora-food101",
    remove_unused_columns=False,
    evaluation_strategy="epoch",
    save_strategy="epoch
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_28
|
    {model_name}-finetuned-lora-food101",
    remove_unused_columns=False,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-3,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=batch_size,
    fp16=True,
    num_train_epochs=5,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
    label_names=[
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_29
|
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
    label_names=["labels"],
)
Compared to non-PEFT methods, you can use a larger batch size since there are fewer parameters to train.
You can also set a larger learning rate than the usual one (for example, larger than 1e-5).
This can potentially also reduce the need to conduct expensive hyperparameter tuning experiments.
Prepare evaluation metric
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_30
|
.
This can potentially also reduce the need to conduct expensive hyperparameter tuning experiments.
Prepare evaluation metric
Copied
import numpy as np
import evaluate
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    """Computes accuracy on a batch of predictions"""
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)
The comp
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_31
|
rgmax(eval_pred.predictions, axis=1)
return metric.compute(predictions=predictions, references=eval_pred.label_ids)
The compute_metrics function takes a named tuple as input: predictions, which are the logits of the model as Numpy arrays,
and label_ids, which are the ground-truth labels as Numpy arrays.
Define collation function
A collation function is used by Trainer to gather a batch of training and evaluation examples and prepare them
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_32
|
lation function
A collation function is used by Trainer to gather a batch of training and evaluation examples and prepare them in a
format that is acceptable by the underlying model.
Copied
import torch
def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    labels = torch.tensor([example["label"] for example in examples])
    return {"pixel_values": pixel_values, "labels": labels}
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_33
|
    labels = torch.tensor([example["label"] for example in examples])
    return {"pixel_values": pixel_values, "labels": labels}
Train and evaluate
Bring everything together - model, training arguments, data, collation function, etc. Then, start the training!
Copied
trainer = Trainer(
    lora_model,
    args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    tokenizer=image_processor,
    compute_metrics=compute_metrics,
    data_c
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_34
|
    train_dataset=train_ds,
    eval_dataset=val_ds,
    tokenizer=image_processor,
    compute_metrics=compute_metrics,
    data_collator=collate_fn,
)
train_results = trainer.train()
In just a few minutes, the fine-tuned model shows 96% validation accuracy even on this small
subset of the training dataset.
Copied
trainer.evaluate(val_ds)
{
"eval_loss": 0.14475855231285095,
"eval_accuracy": 0.96,
"eval_runtime": 3.5725,
"eval_s
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_35
|
iner.evaluate(val_ds)
{
"eval_loss": 0.14475855231285095,
"eval_accuracy": 0.96,
"eval_runtime": 3.5725,
"eval_samples_per_second": 139.958,
"eval_steps_per_second": 1.12,
"epoch": 5.0,
}
Share your model and run inference
Once the fine-tuning is done, share the LoRA parameters with the community like so:
Copied
repo_name = f"sayakpaul/{model_name}-finetuned-lora-food101"
lora_model.push_to_hub(repo_name)
When call
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_36
|
nity like so:
Copied
repo_name = f"sayakpaul/{model_name}-finetuned-lora-food101"
lora_model.push_to_hub(repo_name)
When calling push_to_hub on the lora_model, only the LoRA parameters along with any modules specified in modules_to_save
are saved. Take a look at the trained LoRA parameters.
You'll see that it's only 2.6 MB! This greatly helps with portability, especially when using a very large model to fine-tune (such as BLOOM).
Next, let's
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_37
|
2.6 MB! This greatly helps with portability, especially when using a very large model to fine-tune (such as BLOOM).
Next, let's see how to load the LoRA updated parameters along with our base model for inference. When you wrap a base model
with PeftModel, modifications are done in-place. To mitigate any concerns that might stem from in-place modifications,
initialize the base model just like you did earlier and construct the inference model.
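A sketch of that inference setup, reusing the same initialization as before (treat this as illustrative rather than the notebook's exact code):
Copied
from peft import PeftConfig, PeftModel

config = PeftConfig.from_pretrained(repo_name)
model = AutoModelForImageClassification.from_pretrained(
    config.base_model_name_or_path,
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,
)
# Load the LoRA parameters (and the saved classifier head) on top of the freshly initialized base model.
inference_model = PeftModel.from_pretrained(model, repo_name)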
|
b586f643073f7948b3098cace31f8c69.txt
|