Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes.
- .gitattributes +1 -0
- peft/.github/ISSUE_TEMPLATE/bug-report.yml +54 -0
- peft/.github/ISSUE_TEMPLATE/feature-request.yml +21 -0
- peft/.github/workflows/build_docker_images.yml +150 -0
- peft/.github/workflows/build_documentation.yml +22 -0
- peft/.github/workflows/build_pr_documentation.yml +19 -0
- peft/.github/workflows/deploy_method_comparison_app.yml +41 -0
- peft/.github/workflows/integrations_tests.yml +86 -0
- peft/.github/workflows/nightly-bnb.yml +249 -0
- peft/.github/workflows/nightly.yml +115 -0
- peft/.github/workflows/stale.yml +34 -0
- peft/.github/workflows/test-docker-build.yml +66 -0
- peft/.github/workflows/tests-main.yml +43 -0
- peft/.github/workflows/tests.yml +133 -0
- peft/.github/workflows/torch_compile_tests.yml +56 -0
- peft/.github/workflows/trufflehog.yml +18 -0
- peft/.github/workflows/upload_pr_documentation.yml +18 -0
- peft/.github/workflows/zizmor.yaml +28 -0
- peft/.github/zizmor.yml +24 -0
- peft/.gitignore +145 -0
- peft/.pre-commit-config.yaml +13 -0
- peft/LICENSE +201 -0
- peft/Makefile +66 -0
- peft/README.md +189 -0
- peft/docker/README.md +8 -0
- peft/docker/peft-cpu/Dockerfile +52 -0
- peft/docker/peft-gpu-bnb-latest/Dockerfile +68 -0
- peft/docker/peft-gpu-bnb-source/Dockerfile +68 -0
- peft/docker/peft-gpu/Dockerfile +70 -0
- peft/docs/Makefile +19 -0
- peft/docs/README.md +267 -0
- peft/docs/source/_config.py +7 -0
- peft/docs/source/_toctree.yml +151 -0
- peft/docs/source/accelerate/deepspeed.md +449 -0
- peft/docs/source/accelerate/fsdp.md +285 -0
- peft/docs/source/conceptual_guides/adapter.md +136 -0
- peft/docs/source/conceptual_guides/ia3.md +68 -0
- peft/docs/source/conceptual_guides/oft.md +165 -0
- peft/docs/source/conceptual_guides/prompting.md +93 -0
- peft/docs/source/developer_guides/checkpoint.md +244 -0
- peft/docs/source/developer_guides/contributing.md +96 -0
- peft/docs/source/developer_guides/custom_models.md +304 -0
- peft/docs/source/developer_guides/lora.md +822 -0
- peft/docs/source/developer_guides/low_level_api.md +148 -0
- peft/docs/source/developer_guides/mixed_models.md +37 -0
- peft/docs/source/developer_guides/model_merging.md +164 -0
- peft/docs/source/developer_guides/quantization.md +294 -0
- peft/docs/source/developer_guides/torch_compile.md +71 -0
- peft/docs/source/developer_guides/troubleshooting.md +458 -0
- peft/docs/source/index.md +49 -0
.gitattributes
CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 examples/hra_dreambooth/a_purple_qwe_backpack.png filter=lfs diff=lfs merge=lfs -text
+peft/examples/hra_dreambooth/a_purple_qwe_backpack.png filter=lfs diff=lfs merge=lfs -text
peft/.github/ISSUE_TEMPLATE/bug-report.yml
ADDED
@@ -0,0 +1,54 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve the library
body:
  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: Please share your relevant system information with us
      placeholder: peft & accelerate & transformers version, platform, python version, ...
    validations:
      required: true

  - type: textarea
    id: who-can-help
    attributes:
      label: Who can help?
      description: |
        Your issue will be replied to more quickly if you can figure out the right person to tag with @.
        If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.

        All issues are read by one of the core maintainers, so if you don't know who to tag, just leave this blank and
        a core maintainer will ping the right person.

        Please tag fewer than 3 people.

        Library: @benjaminbossan @githubnemo

        diffusers integration: @benjaminbossan @sayakpaul

        Documentation: @stevhliu

      placeholder: "@Username ..."

  - type: textarea
    id: reproduction
    validations:
      required: true
    attributes:
      label: Reproduction
      description: |
        Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
        Please provide the simplest reproducer as possible so that we can quickly fix the issue. When you paste
        the error message, please include the full traceback.

      placeholder: |
        Reproducer:

  - type: textarea
    id: expected-behavior
    validations:
      required: true
    attributes:
      label: Expected behavior
      description: "A clear and concise description of what you would expect to happen."
peft/.github/ISSUE_TEMPLATE/feature-request.yml
ADDED
@@ -0,0 +1,21 @@
name: "\U0001F680 Feature request"
description: Submit a proposal/request for a new feature
labels: [ "feature" ]
body:
  - type: textarea
    id: feature-request
    validations:
      required: true
    attributes:
      label: Feature request
      description: |
        A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist.

  - type: textarea
    id: contribution
    validations:
      required: true
    attributes:
      label: Your contribution
      description: |
        Is there any way that you could help, e.g. by submitting a PR?
peft/.github/workflows/build_docker_images.yml
ADDED
@@ -0,0 +1,150 @@
name: Build Docker images (scheduled)

on:
  workflow_dispatch:
  workflow_call:
  schedule:
    - cron: "0 1 * * *"

concurrency:
  group: docker-image-builds
  cancel-in-progress: false

permissions: {}

env:
  CI_SLACK_CHANNEL: ${{ secrets.CI_DOCKER_CHANNEL }}

jobs:
  latest-cpu:
    name: "Latest Peft CPU [dev]"
    runs-on:
      group: aws-general-8-plus
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: Check out code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Login to DockerHub
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and Push CPU
        uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
        with:
          context: ./docker/peft-cpu
          push: true
          tags: huggingface/peft-cpu

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ env.CI_SLACK_CHANNEL }}
          title: 🤗 Results of the PEFT-CPU docker build
          status: ${{ job.status }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

  latest-cuda:
    name: "Latest Peft GPU [dev]"
    runs-on:
      group: aws-general-8-plus
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: Check out code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Login to DockerHub
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and Push GPU
        uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
        with:
          context: ./docker/peft-gpu
          push: true
          tags: huggingface/peft-gpu

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ env.CI_SLACK_CHANNEL }}
          title: 🤗 Results of the PEFT-GPU docker build
          status: ${{ job.status }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

  latest-cuda-bnb-source:
    name: "Latest Peft GPU + bnb source [dev]"
    runs-on:
      group: aws-general-8-plus
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: Check out code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Login to DockerHub
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and Push GPU
        uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
        with:
          context: ./docker/peft-gpu-bnb-source
          push: true
          tags: huggingface/peft-gpu-bnb-source

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ env.CI_SLACK_CHANNEL }}
          title: 🤗 Results of the PEFT-GPU (bnb source / HF latest) docker build
          status: ${{ job.status }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

  latest-cuda-bnb-source-latest:
    name: "Latest Peft GPU + bnb source [accelerate / peft / transformers latest]"
    runs-on:
      group: aws-general-8-plus
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: Check out code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Login to DockerHub
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and Push GPU
        uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
        with:
          context: ./docker/peft-gpu-bnb-latest
          push: true
          tags: huggingface/peft-gpu-bnb-latest

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ env.CI_SLACK_CHANNEL }}
          title: 🤗 Results of the PEFT-GPU (bnb source / HF source) docker build
          status: ${{ job.status }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
peft/.github/workflows/build_documentation.yml
ADDED
@@ -0,0 +1,22 @@
name: Build documentation

on:
  push:
    branches:
      - main
      - doc-builder*
      - v*-release

permissions: {}

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@ba4b74d11c46d884a4cf6497687c090f55f027d9 # main from 2025-09-05
    with:
      commit_sha: ${{ github.sha }}
      package: peft
      notebook_folder: peft_docs
      custom_container: huggingface/transformers-doc-builder
    secrets:
      token: ${{ secrets.HUGGINGFACE_PUSH }}
      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
peft/.github/workflows/build_pr_documentation.yml
ADDED
@@ -0,0 +1,19 @@
name: Build PR Documentation

on:
  pull_request:

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

permissions: {}

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@ba4b74d11c46d884a4cf6497687c090f55f027d9 # main from 2025-09-05
    with:
      commit_sha: ${{ github.event.pull_request.head.sha }}
      pr_number: ${{ github.event.number }}
      package: peft
      custom_container: huggingface/transformers-doc-builder
peft/.github/workflows/deploy_method_comparison_app.yml
ADDED
@@ -0,0 +1,41 @@
name: Deploy "method_comparison" Gradio to Spaces

on:
  push:
    branches: [ main ]
    paths:
      - "method_comparison/**"
  workflow_dispatch:

permissions: {}

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          fetch-depth: 0 # full history needed for subtree
          persist-credentials: false

      - name: Authenticate via ~/.netrc
        env:
          HF_TOKEN: ${{ secrets.PEFT_INTERNAL_REPO_READ_WRITE }}
        run: |
          # netrc needs BOTH login and password entries
          printf "machine huggingface.co\nlogin hf\npassword ${HF_TOKEN}\n" >> ~/.netrc
          chmod 600 ~/.netrc

      - name: Deploy method_comparison app to HF Spaces
        run: |
          cd method_comparison
          git init
          # Spaces expect requirements.txt
          mv requirements-app.txt requirements.txt
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git remote add gradio-app https://huggingface.co/spaces/peft-internal-testing/PEFT-method-comparison
          git add .
          git commit -m "🚀 Deploy method comparison app from GH action"
          git push -f gradio-app HEAD:main
peft/.github/workflows/integrations_tests.yml
ADDED
@@ -0,0 +1,86 @@
name: integration tests

on:
  workflow_dispatch:
    inputs:
      branch:
        description: 'Branch to test on'
        required: true

permissions: {}

jobs:
  run_transformers_integration_tests:
    strategy:
      fail-fast: false
      matrix:
        transformers-version: ['main', 'latest']
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          ref: ${{ github.event.inputs.branch }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          persist-credentials: false
      - name: Set up Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: "3.10"
          cache: "pip"
          cache-dependency-path: "setup.py"
      - name: print environment variables
        run: |
          echo "env.CI_BRANCH = ${CI_BRANCH}"
          echo "env.CI_SHA = ${CI_SHA}"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install .[test]
          if [ "${{ matrix.transformers-version }}" == "main" ]; then
            pip install -U git+https://github.com/huggingface/transformers.git
          else
            echo "Nothing to do as transformers latest already installed"
          fi

      - name: Test transformers integration
        run: |
          cd .. && git clone https://github.com/huggingface/transformers.git && cd transformers/ && git rev-parse HEAD
          RUN_SLOW=1 pytest tests/peft_integration/test_peft_integration.py
  run_diffusers_integration_tests:
    strategy:
      fail-fast: false
      matrix:
        # For now diffusers integration is not on PyPI
        diffusers-version: ['main']
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          ref: ${{ github.event.inputs.branch }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          persist-credentials: false
      - name: Set up Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: "3.10"
          cache: "pip"
          cache-dependency-path: "setup.py"
      - name: print environment variables
        run: |
          echo "env.CI_BRANCH = ${CI_BRANCH}"
          echo "env.CI_SHA = ${CI_SHA}"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install .[test]

          if [ "${{ matrix.diffusers-version }}" == "main" ]; then
            pip install -U git+https://github.com/huggingface/diffusers.git
          else
            echo "Nothing to do as diffusers latest already installed"
          fi

      - name: Test diffusers integration
        run: |
          cd .. && git clone https://github.com/huggingface/diffusers.git && cd diffusers/ && git rev-parse HEAD
          pytest tests/lora/test_lora_layers_peft.py
peft/.github/workflows/nightly-bnb.yml
ADDED
@@ -0,0 +1,249 @@
name: BNB from source self-hosted runner with slow tests (scheduled)

on:
  workflow_dispatch:
  schedule:
    - cron: "0 2 * * *"

env:
  RUN_SLOW: "yes"
  IS_GITHUB_CI: "1"
  # To be able to run tests on CUDA 12.2
  NVIDIA_DISABLE_REQUIRE: "1"
  SLACK_API_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

permissions: {}

jobs:
  run_all_tests_single_gpu:
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        docker-image-name: ["huggingface/peft-gpu-bnb-source:latest", "huggingface/peft-gpu-bnb-latest:latest"]
    runs-on:
      group: aws-g6-4xlarge-plus
    env:
      CUDA_VISIBLE_DEVICES: "0"
      TEST_TYPE: "single_gpu_${{ matrix.docker-image-name }}"
    container:
      image: ${{ matrix.docker-image-name }}
      options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Pip install
        run: |
          source activate peft
          pip install -e . --no-deps
          pip install pytest-reportlog pytest-cov parameterized datasets scipy einops
          pip install "pytest>=7.2.0,<8.0.0" # see: https://github.com/huggingface/transformers/blob/ce4fff0be7f6464d713f7ac3e0bbaafbc6959ae5/setup.py#L148C6-L148C26
          mkdir transformers-clone && git clone https://github.com/huggingface/transformers.git transformers-clone # rename to transformers clone to avoid modules conflict
          if [ "${{ matrix.docker-image-name }}" == "huggingface/peft-gpu-bnb-latest:latest" ]; then
            cd transformers-clone
            transformers_version=$(pip show transformers | grep '^Version:' | cut -d ' ' -f2 | sed 's/\.dev0//')
            echo "Checking out tag for Transformers version: v$transformers_version"
            git fetch --tags
            git checkout tags/v$transformers_version
            cd ..
          fi

      - name: Test bnb import
        id: import
        if: always()
        run: |
          source activate peft
          python3 -m bitsandbytes
          python3 -c "import bitsandbytes as bnb"

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes import
          status: ${{ steps.import.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Run examples on single GPU
        id: examples_tests
        if: always()
        run: |
          source activate peft
          make tests_examples_single_gpu_bnb

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes examples tests - single GPU
          status: ${{ steps.examples_tests.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Run core tests on single GPU
        id: core_tests
        if: always()
        run: |
          source activate peft
          make tests_core_single_gpu_bnb

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes core tests - single GPU
          status: ${{ steps.core_tests.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      # TODO: this is a test to see if BNB multi-backend single-GPU tests succeed w/o regression tests
      # - name: Run BNB regression tests on single GPU
      #   id: regression_tests
      #   if: always()
      #   run: |
      #     source activate peft
      #     make tests_gpu_bnb_regression

      # - name: Post to Slack
      #   if: always()
      #   uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
      #   with:
      #     slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
      #     title: 🤗 Results of bitsandbytes regression tests - single GPU
      #     status: ${{ steps.regression_tests.outcome }}
      #     slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Run transformers tests on single GPU
        id: transformers_tests
        if: always()
        run: |
          source activate peft
          make transformers_tests

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes transformers tests - single GPU
          status: ${{ steps.transformers_tests.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Generate Report
        if: always()
        run: |
          pip install slack_sdk tabulate
          python scripts/log_reports.py --slack_channel_name bnb-daily-ci-collab >> $GITHUB_STEP_SUMMARY

  run_all_tests_multi_gpu:
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        docker-image-name: ["huggingface/peft-gpu-bnb-source:latest", "huggingface/peft-gpu-bnb-latest:latest"]
    runs-on:
      group: aws-g6-12xlarge-plus
    env:
      CUDA_VISIBLE_DEVICES: "0,1"
      TEST_TYPE: "multi_gpu_${{ matrix.docker-image-name }}"
    container:
      image: ${{ matrix.docker-image-name }}
      options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Pip install
        run: |
          source activate peft
          pip install -e . --no-deps
          pip install pytest-reportlog pytest-cov parameterized datasets scipy einops
          pip install "pytest>=7.2.0,<8.0.0" # see: https://github.com/huggingface/transformers/blob/ce4fff0be7f6464d713f7ac3e0bbaafbc6959ae5/setup.py#L148C6-L148C26
          mkdir transformers-clone && git clone https://github.com/huggingface/transformers.git transformers-clone
          if [ "${{ matrix.docker-image-name }}" == "huggingface/peft-gpu-bnb-latest:latest" ]; then
            cd transformers-clone
            transformers_version=$(pip show transformers | grep '^Version:' | cut -d ' ' -f2 | sed 's/\.dev0//')
            echo "Checking out tag for Transformers version: v$transformers_version"
            git fetch --tags
            git checkout tags/v$transformers_version
            cd ..
          fi

      - name: Test bnb import
        id: import
        if: always()
        run: |
          source activate peft
          python3 -m bitsandbytes
          python3 -c "import bitsandbytes as bnb"

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes import
          status: ${{ steps.import.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Run examples on multi GPU
        id: examples_tests
        if: always()
        run: |
          source activate peft
          make tests_examples_multi_gpu_bnb

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes examples tests - multi GPU
          status: ${{ steps.examples_tests.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Run core tests on multi GPU
        id: core_tests
        if: always()
        run: |
          source activate peft
          make tests_core_multi_gpu_bnb

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes core tests - multi GPU
          status: ${{ steps.core_tests.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Run transformers tests on multi GPU
        id: transformers_tests
        if: always()
        run: |
          source activate peft
          make transformers_tests

      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.BNB_SLACK_CHANNEL_ID }}
          title: 🤗 Results of bitsandbytes transformers tests - multi GPU
          status: ${{ steps.transformers_tests.outcome }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

      - name: Generate Report
        if: always()
        run: |
          pip install slack_sdk tabulate
          python scripts/log_reports.py --slack_channel_name bnb-daily-ci-collab >> $GITHUB_STEP_SUMMARY
peft/.github/workflows/nightly.yml
ADDED
@@ -0,0 +1,115 @@
name: Self-hosted runner with slow tests (scheduled)

on:
  workflow_dispatch:
  schedule:
    - cron: "0 2 * * *"

env:
  RUN_SLOW: "yes"
  IS_GITHUB_CI: "1"
  # To be able to run tests on CUDA 12.2
  NVIDIA_DISABLE_REQUIRE: "1"
  SLACK_API_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}

permissions: {}

jobs:
  run_all_tests_single_gpu:
    strategy:
      fail-fast: false
    runs-on:
      group: aws-g6-4xlarge-plus
    env:
      CUDA_VISIBLE_DEVICES: "0"
      TEST_TYPE: "single_gpu"
    container:
      image: huggingface/peft-gpu:latest
      options: --gpus all --shm-size "16gb" -e NVIDIA_DISABLE_REQUIRE=true
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Pip install
        run: |
          source activate peft
          pip install -e . --no-deps
          pip install pytest-reportlog

      - name: Run common tests on single GPU
        run: |
          source activate peft
          make tests_common_gpu

      - name: Run examples on single GPU
        run: |
          source activate peft
          make tests_examples_single_gpu

      - name: Run core tests on single GPU
        run: |
          source activate peft
          make tests_core_single_gpu

      - name: Run regression tests on single GPU
        run: |
          source activate peft
          make tests_regression

      - name: Generate Report
        if: always()
        run: |
          pip install slack_sdk tabulate
          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY

  run_all_tests_multi_gpu:
    strategy:
      fail-fast: false
    runs-on:
      group: aws-g6-12xlarge-plus
    env:
      CUDA_VISIBLE_DEVICES: "0,1"
      TEST_TYPE: "multi_gpu"
    container:
      image: huggingface/peft-gpu:latest
      options: --gpus all --shm-size "16gb" -e NVIDIA_DISABLE_REQUIRE=true
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Pip install
        run: |
          source activate peft
          pip install -e . --no-deps
          pip install pytest-reportlog

      - name: Run core GPU tests on multi-gpu
        run: |
          source activate peft

      - name: Run common tests on multi GPU
        run: |
          source activate peft
          make tests_common_gpu

      - name: Run examples on multi GPU
        run: |
          source activate peft
          make tests_examples_multi_gpu

      - name: Run core tests on multi GPU
        run: |
          source activate peft
          make tests_core_multi_gpu

      - name: Generate Report
        if: always()
        run: |
          pip install slack_sdk tabulate
          python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
peft/.github/workflows/stale.yml
ADDED
@@ -0,0 +1,34 @@
name: Stale Bot

on:
  schedule:
    - cron: "0 15 * * *"

permissions: {}

jobs:
  close_stale_issues:
    name: Close Stale Issues
    if: github.repository == 'huggingface/peft'
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false

      - name: Setup Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: 3.11

      - name: Install requirements
        run: |
          pip install PyGithub
      - name: Close stale issues
        run: |
          python scripts/stale.py
peft/.github/workflows/test-docker-build.yml
ADDED
@@ -0,0 +1,66 @@
name: Test Docker images (on PR)

on:
  pull_request:
    paths:
      # Run only when DockerFile files are modified
      - "docker/*/Dockerfile"

permissions: {}

jobs:
  get_changed_files:
    name: "Build all modified docker images"
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
      - name: Check out code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@1c8e6069583811afb28f97afeaf8e7da80c6be5c #v42
        with:
          files: docker/*/Dockerfile
          json: "true"
      - name: Run step if only the files listed above change
        if: steps.changed-files.outputs.any_changed == 'true'
        id: set-matrix
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
        run: |
          echo "matrix=${ALL_CHANGED_FILES}" >> $GITHUB_OUTPUT
  build_modified_files:
    needs: get_changed_files
    name: Build Docker images on modified files
    runs-on: ubuntu-latest
    if: ${{ needs.get_changed_files.outputs.matrix != '[]' }}
    strategy:
      fail-fast: false
      matrix:
        docker-file: ${{ fromJson(needs.get_changed_files.outputs.matrix) }}
    steps:
      - name: Cleanup disk
        run: |
          sudo ls -l /usr/local/lib/
          sudo ls -l /usr/share/
          sudo du -sh /usr/local/lib/
          sudo du -sh /usr/share/
          sudo rm -rf /usr/local/lib/android
          sudo rm -rf /usr/share/dotnet
          sudo du -sh /usr/local/lib/
          sudo du -sh /usr/share/
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
      - name: Check out code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Build Docker image
        uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
        with:
          file: ${{ matrix.docker-file }}
          context: .
          push: False
peft/.github/workflows/tests-main.yml
ADDED
@@ -0,0 +1,43 @@
name: tests on transformers main

on:
  push:
    branches: [main]
    paths-ignore:
      - 'docs/**'

permissions: {}

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Set up Python 3.11
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: 3.11
          cache: "pip"
          cache-dependency-path: "setup.py"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          # cpu version of pytorch
          pip install -U git+https://github.com/huggingface/transformers.git
          pip install -e .[test]
      - name: Test with pytest
        env:
          TRANSFORMERS_IS_CI: 1
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: |
          make test
      - name: Post to Slack
        if: always()
        uses: huggingface/hf-workflows/.github/actions/post-slack@3f88d63d3761558a32e8e46fc2a8536e04bb2aea # main from Feb 2025-02-24
        with:
          slack_channel: ${{ secrets.SLACK_CHANNEL_ID }}
          title: 🤗 Results of transformers main tests
          status: ${{ job.status }}
          slack_token: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
peft/.github/workflows/tests.yml
ADDED
@@ -0,0 +1,133 @@
name: tests

on:
  push:
    branches: [main]
    paths-ignore:
      - 'docs/**'
  pull_request:
    paths-ignore:
      - 'docs/**'

env:
  HF_HOME: .cache/huggingface

permissions: {}

jobs:
  check_code_quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Set up Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: "3.11"
          cache: "pip"
          cache-dependency-path: "setup.py"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[dev]
      - name: Check quality
        run: |
          make quality

  tests:
    needs: check_code_quality
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]
        os: ["ubuntu-latest", "macos-13", "windows-latest"]
        exclude:
          - os: macos-13
            python-version: "3.13"
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Model cache
        uses: actions/cache/restore@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          # Avoid caching HF_HOME/modules and Python cache files to prevent interoperability
          # issues and potential cache poisioning. We also avoid lock files to prevent runs
          # avoiding re-download because they see a lock file.
          path: |
            ${{ env.HF_HOME }}/hub/**
            !${{ env.HF_HOME }}/**/*.pyc
          key: model-cache-${{ github.run_id }}
          restore-keys: model-cache-
          enableCrossOsArchive: true
      - name: Dump cache content
        # TODO: remove this step after 2025-02-15
        if: matrix.os != 'windows-latest'
        run: |
          SHASUM=sha256sum
          [ -f "$(which shasum)" ] && SHASUM=shasum
          find "${{ env.HF_HOME }}/hub" -type f -exec "$SHASUM" {} \; > cache_content_initial || true
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: ${{ matrix.python-version }}
          cache: "pip"
          cache-dependency-path: "setup.py"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install setuptools
          # cpu version of pytorch
          pip install -e .[test]
      - name: Test with pytest
        # MacOS tests are currently too flaky and will fail almost each time. Thus, continue (green checkmark) even if
        # they fail, but add a notice so that the failure is not completely silent
        continue-on-error: ${{ matrix.os == 'macos-13' }}
        shell: bash
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          TRANSFORMERS_IS_CI: 1
        run: |
          set +e
          make test
          status=$?
          # Post a notice only if this is macOS AND tests failed
          if [ "$status" -ne 0 ] && [ "${{ matrix.os }}" = "macos-13" ]; then
            {
              echo "## ⚠️ macOS tests failed"
              echo ""
              echo "- OS: ${{ matrix.os }}"
              echo "- Python: ${{ matrix.python-version }}"
              echo ""
              echo "Check the logs from this step for details."
            } >> "$GITHUB_STEP_SUMMARY"
          fi
          # Return the real status. On macOS this won't fail the job because of continue-on-error.
          exit $status
      - name: Dump cache content and diff
        # This is just debug info so that we can monitor if the model cache diverges substantially
        # over time and what the diverging model is.
        # TODO: remove after 2025-02-15
        if: matrix.os != 'windows-latest'
        run: |
          SHASUM=sha256sum
          [ -f "$(which shasum)" ] && SHASUM=shasum
          find "${{ env.HF_HOME }}/hub" -type f -exec "$SHASUM" {} \; > cache_content_after || true
          diff -udp cache_content_initial cache_content_after || true
      - name: Delete old model cache entries
        run: |
          # make sure that cache cleaning doesn't break the pipeline
          python scripts/ci_clean_cache.py -d || true
      - name: Update model cache
        uses: actions/cache/save@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        # Only let one runner (preferably the one that covers most tests) update the model cache
        # after *every* run. This way we make sure that our cache is never outdated and we don't
        # have to keep track of hashes.
        if: always() && matrix.os == 'ubuntu-latest' && matrix.python-version == '3.10'
        with:
          path: |
            ${{ env.HF_HOME }}/hub/**
            !${{ env.HF_HOME }}/**/*.pyc
          key: model-cache-${{ github.run_id }}
peft/.github/workflows/torch_compile_tests.yml
ADDED
@@ -0,0 +1,56 @@
name: torch compile tests

on:
  workflow_dispatch:
    inputs:
      branch:
        description: 'Branch to test on'
        required: true
      pytorch_nightly:
        description: 'Whether to use PyTorch nightly (true/false)'
        required: false
        default: false

env:
  RUN_SLOW: "yes"
  IS_GITHUB_CI: "1"
  # To be able to run tests on CUDA 12.2
  NVIDIA_DISABLE_REQUIRE: "1"

permissions: {}

jobs:
  run_tests_with_compile:
    runs-on:
      group: aws-g6-4xlarge-plus
    env:
      PEFT_DEBUG_WITH_TORCH_COMPILE: 1
      CUDA_VISIBLE_DEVICES: "0"
      TEST_TYPE: "single_gpu_huggingface/peft-gpu-bnb-latest:latest"
      USE_PYTORCH_NIGHTLY: "${{ github.event.inputs.pytorch_nightly }}"
    container:
      image: "huggingface/peft-gpu-bnb-latest:latest"
      options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          ref: ${{ github.event.inputs.branch }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          persist-credentials: false
      - name: Pip install
        run: |
          source activate peft
          pip install -e . --no-deps
          pip install pytest-cov pytest-reportlog parameterized datasets scipy einops
          pip install "pytest>=7.2.0,<8.0.0" # see: https://github.com/huggingface/transformers/blob/ce4fff0be7f6464d713f7ac3e0bbaafbc6959ae5/setup.py#L148C6-L148C26
          if [ "${USE_PYTORCH_NIGHTLY}" = "true" ]; then
            python -m pip install --upgrade --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
          fi
      - name: Test compile with pytest
        run: |
          source activate peft
          echo "PEFT_DEBUG_WITH_TORCH_COMPILE=$PEFT_DEBUG_WITH_TORCH_COMPILE"
          make tests_torch_compile
peft/.github/workflows/trufflehog.yml
ADDED
@@ -0,0 +1,18 @@
on:
  push:

name: Secret Leaks

permissions: {}

jobs:
  trufflehog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          fetch-depth: 0
          persist-credentials: false
      - name: Secret Scanning
        uses: trufflesecurity/trufflehog@0f58ae7c5036094a1e3e750d18772af92821b503 # v3.90.5
peft/.github/workflows/upload_pr_documentation.yml
ADDED
@@ -0,0 +1,18 @@
name: Upload PR Documentation

on:
  workflow_run:
    workflows: ["Build PR Documentation"]
    types:
      - completed

permissions: {}

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@ba4b74d11c46d884a4cf6497687c090f55f027d9 # main from 2025-09-05
    with:
      package_name: peft
    secrets:
      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
      comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
peft/.github/workflows/zizmor.yaml
ADDED
@@ -0,0 +1,28 @@
name: CI security linting

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["*"]
    paths:
      - '.github/**'

permissions: {}

jobs:
  zizmor:
    name: zizmor latest via Cargo
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
      - name: Install zizmor
        run: cargo install --locked zizmor
      - name: Run zizmor
        run: zizmor .github/workflows
peft/.github/zizmor.yml
ADDED
@@ -0,0 +1,24 @@
rules:
  dangerous-triggers:
    ignore:
      # this workflow is only triggered after maintainer approval
      - upload_pr_documentation.yml:3:1
  cache-poisoning:
    ignore:
      # the docker buildx binary is cached and zizmor warns about a cache poisoning attack.
      # OTOH this cache would make us more resilient against an intrusion on docker-buildx' side.
      # There is no obvious benefit so we leave it as it is.
      - build_docker_images.yml:37:9
      - build_docker_images.yml:70:9
      - build_docker_images.yml:103:9
      - build_docker_images.yml:136:9
      - build_docker_images.yml:169:9
  unpinned-images:
    ignore:
      # We want to test these images with the latest version and we're not using them
      # to deploy anything so we deem it safe to use those, even if they are unpinned.
      - nightly-bnb.yml:30:7
      - nightly-bnb.yml:155:7
      - nightly.yml:27:7
      - nightly.yml:77:7
      - torch_compile_tests.yml:32:7
peft/.gitignore
ADDED
@@ -0,0 +1,145 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# VSCode
.vscode

# IntelliJ
.idea

# Mac .DS_Store
.DS_Store

# More test things
wandb

# method_comparison logs
method_comparison/MetaMathQA/cancelled_results/
method_comparison/MetaMathQA/temporary_results/
peft/.pre-commit-config.yaml
ADDED
@@ -0,0 +1,13 @@
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.12.8
    hooks:
      - id: ruff
        args:
          - --fix
      - id: ruff-format
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: check-merge-conflict
      - id: check-yaml
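To use this config locally, a typical setup looks roughly like the following. This is standard `pre-commit` CLI usage and not something prescribed by the config above; the commands assume you run them from your clone of the repository.

```bash
pip install pre-commit        # install the pre-commit tool itself
pre-commit install            # register the git hook so checks run on every commit
pre-commit run --all-files    # optionally lint and format the whole tree once
```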
peft/LICENSE
ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
peft/Makefile
ADDED
@@ -0,0 +1,66 @@
.PHONY: quality style test docs

check_dirs := src tests examples docs scripts docker

# Check that source code meets quality standards

# this target runs checks on all files
quality:
	ruff check $(check_dirs)
	ruff format --check $(check_dirs)
	doc-builder style src/peft tests docs/source --max_len 119 --check_only

# Format source code automatically and check if there are any problems left that need manual fixing
style:
	ruff check --fix $(check_dirs)
	ruff format $(check_dirs)
	doc-builder style src/peft tests docs/source --max_len 119

test:
	python -m pytest -n 3 tests/ $(if $(IS_GITHUB_CI),--report-log "ci_tests.log",)

tests_examples_multi_gpu:
	python -m pytest -m multi_gpu_tests tests/test_gpu_examples.py $(if $(IS_GITHUB_CI),--report-log "multi_gpu_examples.log",)

tests_examples_single_gpu:
	python -m pytest -m single_gpu_tests tests/test_gpu_examples.py $(if $(IS_GITHUB_CI),--report-log "single_gpu_examples.log",)

tests_core_multi_gpu:
	python -m pytest -m multi_gpu_tests tests/test_common_gpu.py $(if $(IS_GITHUB_CI),--report-log "core_multi_gpu.log",)

tests_core_single_gpu:
	python -m pytest -m single_gpu_tests tests/test_common_gpu.py $(if $(IS_GITHUB_CI),--report-log "core_single_gpu.log",)

# exclude gemma tests, as generation fails with torch.compile, these failures
# trigger side effects that make other tests fail with 'RuntimeError: Offset
# increment outside graph capture encountered unexpectedly.'
# TODO re-enable gemma once/if it is fixed
tests_common_gpu:
	python -m pytest tests/test_decoder_models.py -k "not gemma" $(if $(IS_GITHUB_CI),--report-log "common_decoder.log",)
	python -m pytest tests/test_encoder_decoder_models.py $(if $(IS_GITHUB_CI),--report-log "common_encoder_decoder.log",)
	python -m pytest tests/test_gptqmodel.py $(if $(IS_GITHUB_CI),--report-log "gptqmodel_gpu.log",)

tests_examples_multi_gpu_bnb:
	python -m pytest -m "multi_gpu_tests and bitsandbytes" tests/test_gpu_examples.py $(if $(IS_GITHUB_CI),--report-log "multi_gpu_examples.log",)

tests_examples_single_gpu_bnb:
	python -m pytest -m "single_gpu_tests and bitsandbytes" tests/test_gpu_examples.py $(if $(IS_GITHUB_CI),--report-log "single_gpu_examples.log",)

tests_core_multi_gpu_bnb:
	python -m pytest -m "multi_gpu_tests and bitsandbytes" tests/test_common_gpu.py $(if $(IS_GITHUB_CI),--report-log "core_multi_gpu.log",)

tests_core_single_gpu_bnb:
	python -m pytest -m "single_gpu_tests and bitsandbytes" tests/test_common_gpu.py $(if $(IS_GITHUB_CI),--report-log "core_single_gpu.log",)

tests_gpu_bnb_regression:
	python -m pytest tests/bnb/test_bnb_regression.py $(if $(IS_GITHUB_CI),--report-log "bnb_regression_gpu.log",)

# For testing transformers tests for bnb runners
transformers_tests:
	RUN_SLOW=1 python -m pytest transformers-clone/tests/quantization/bnb $(if $(IS_GITHUB_CI),--report-log "transformers_tests.log",)

tests_regression:
	python -m pytest -s --regression tests/regression/ $(if $(IS_GITHUB_CI),--report-log "regression_tests.log",)

tests_torch_compile:
	python -m pytest tests/test_torch_compile.py $(if $(IS_GITHUB_CI),--report-log "compile_tests.log",)
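For orientation, a typical local workflow built on the targets above might look like the following. This is only a sketch of common usage, assuming a development install of the repo with the test extras; the target names come directly from the Makefile.

```bash
make style     # auto-fix formatting with ruff and doc-builder
make quality   # check-only variant of the same checks, as run in CI
make test      # run the CPU test suite (pytest -n 3 tests/)
```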
peft/README.md
ADDED
@@ -0,0 +1,189 @@
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<h1 align="center"> <p>🤗 PEFT</p></h1>
<h3 align="center">
    <p>State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) methods</p>
</h3>

Fine-tuning large pretrained models is often prohibitively costly due to their scale. Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of large pretrained models to various downstream applications by only fine-tuning a small number of (extra) model parameters instead of all the model's parameters. This significantly decreases the computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to fully fine-tuned models.

PEFT is integrated with Transformers for easy model training and inference, Diffusers for conveniently managing different adapters, and Accelerate for distributed training and inference for really big models.

> [!TIP]
> Visit the [PEFT](https://huggingface.co/PEFT) organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks. Click the "Watch repos" button on the organization page to be notified of newly implemented methods and notebooks!

Check the PEFT Adapters API Reference section for a list of supported PEFT methods, and read the [Adapters](https://huggingface.co/docs/peft/en/conceptual_guides/adapter), [Soft prompts](https://huggingface.co/docs/peft/en/conceptual_guides/prompting), and [IA3](https://huggingface.co/docs/peft/en/conceptual_guides/ia3) conceptual guides to learn more about how these methods work.

## Quickstart

Install PEFT from pip:

```bash
pip install peft
```

Prepare a model for training with a PEFT method such as LoRA by wrapping the base model and PEFT configuration with `get_peft_model`. For the Qwen2.5-3B-Instruct model used below, you're only training about 0.12% of the parameters!

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model_id = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type=TaskType.CAUSAL_LM,
    # target_modules=["q_proj", "v_proj", ...]  # optionally indicate target modules
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# prints: trainable params: 3,686,400 || all params: 3,089,625,088 || trainable%: 0.1193

# now perform training on your dataset, e.g. using transformers Trainer, then save the model
model.save_pretrained("qwen2.5-3b-lora")
```

To load a PEFT model for inference:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model_id = "Qwen/Qwen2.5-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device)
model = PeftModel.from_pretrained(model, "qwen2.5-3b-lora")

inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")
outputs = model.generate(**inputs.to(device), max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# prints something like: Preheat the oven to 350 degrees and place the cookie dough in a baking dish [...]
```

## Why you should use PEFT

There are many benefits of using PEFT but the main one is the huge savings in compute and storage, making PEFT applicable to many different use cases.

### High performance on consumer hardware

Consider the memory requirements for training the following models on the [ought/raft/twitter_complaints](https://huggingface.co/datasets/ought/raft/viewer/twitter_complaints) dataset with an A100 80GB GPU with more than 64GB of CPU RAM.

| Model | Full Finetuning | PEFT-LoRA PyTorch | PEFT-LoRA DeepSpeed with CPU Offloading |
| --------- | ---- | ---- | ---- |
| bigscience/T0_3B (3B params) | 47.14GB GPU / 2.96GB CPU | 14.4GB GPU / 2.96GB CPU | 9.8GB GPU / 17.8GB CPU |
| bigscience/mt0-xxl (12B params) | OOM GPU | 56GB GPU / 3GB CPU | 22GB GPU / 52GB CPU |
| bigscience/bloomz-7b1 (7B params) | OOM GPU | 32GB GPU / 3.8GB CPU | 18.1GB GPU / 35GB CPU |

With LoRA you can finetune a 12B parameter model that would've otherwise run out of memory on the 80GB GPU, and comfortably fit and train a 3B parameter model. When you look at the 3B parameter model's performance, it is comparable to a fully finetuned model at a fraction of the GPU memory.

| Submission Name | Accuracy |
| --------- | ---- |
| Human baseline (crowdsourced) | 0.897 |
| Flan-T5 | 0.892 |
| lora-t0-3b | 0.863 |

> [!TIP]
> The bigscience/T0_3B model performance isn't optimized in the table above. You can squeeze even more performance out of it by playing around with the input instruction templates, LoRA hyperparameters, and other training related hyperparameters. The final checkpoint size of this model is just 19MB compared to 11GB of the full bigscience/T0_3B model. Learn more about the advantages of finetuning with PEFT in this [blog post](https://www.philschmid.de/fine-tune-flan-t5-peft).

### Quantization

Quantization is another method for reducing the memory requirements of a model by representing the data in a lower precision. It can be combined with PEFT methods to make it even easier to train and load LLMs for inference; a minimal sketch of the combination is shown after the list below.

* Learn how to finetune [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) with QLoRA and the [TRL](https://huggingface.co/docs/trl/index) library on a 16GB GPU in the [Finetune LLMs on your own consumer hardware using tools from PyTorch and Hugging Face ecosystem](https://pytorch.org/blog/finetune-llms/) blog post.
* Learn how to finetune an [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) model for multilingual automatic speech recognition with LoRA and 8-bit quantization in this [notebook](https://colab.research.google.com/drive/1DOkD_5OUjFa0r5Ik3SgywJLJtEo2qLxO?usp=sharing) (see this [notebook](https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) instead for an example of streaming a dataset).
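To make the combination concrete, here is a minimal QLoRA-style sketch: the frozen base model is loaded in 4-bit with bitsandbytes and a LoRA adapter is trained on top of it. The model id and hyperparameters are illustrative (reused from the Quickstart above), not taken from the linked posts.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

model_id = "Qwen/Qwen2.5-3B-Instruct"  # illustrative; any causal LM works similarly

# Load the frozen base weights in 4-bit NF4 to cut their memory footprint
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

# Make the quantized model ready for training (e.g. casts layer norms to fp32)
model = prepare_model_for_kbit_training(model)

# Attach a LoRA adapter; only the small adapter weights are trained and saved
peft_config = LoraConfig(r=16, lora_alpha=32, task_type=TaskType.CAUSAL_LM)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```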
### Save compute and storage

PEFT helps you save storage by avoiding full finetuning of models on each downstream task or dataset. In many cases, you're only finetuning a very small fraction of a model's parameters and each checkpoint is only a few MBs in size (instead of GBs). These smaller PEFT adapters demonstrate performance comparable to a fully finetuned model. If you have many datasets, you can save a lot of storage with a PEFT model and not have to worry about catastrophic forgetting or overfitting the backbone or base model; a short sketch of keeping several adapters for one base model follows below.
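As a sketch of what this looks like in practice, several task-specific adapters can share one frozen base model and be swapped at runtime. The adapter paths and names here are made up for illustration.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct", device_map="auto")

# Each adapter checkpoint is only a few MB; the base model is loaded once
model = PeftModel.from_pretrained(base, "qwen2.5-3b-lora", adapter_name="dataset_a")  # hypothetical local paths
model.load_adapter("qwen2.5-3b-lora-dataset-b", adapter_name="dataset_b")

model.set_adapter("dataset_b")  # route inference through the second adapter
```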
## PEFT integrations

PEFT is widely supported across the Hugging Face ecosystem because of the massive efficiency it brings to training and inference.

### Diffusers

The iterative diffusion process consumes a lot of memory which can make it difficult to train. PEFT can help reduce the memory requirements and reduce the storage size of the final model checkpoint. For example, consider the memory required for training a Stable Diffusion model with LoRA on an A100 80GB GPU with more than 64GB of CPU RAM. The final model checkpoint size is only 8.8MB!

| Model | Full Finetuning | PEFT-LoRA | PEFT-LoRA with Gradient Checkpointing |
| --------- | ---- | ---- | ---- |
| CompVis/stable-diffusion-v1-4 | 27.5GB GPU / 3.97GB CPU | 15.5GB GPU / 3.84GB CPU | 8.12GB GPU / 3.77GB CPU |

> [!TIP]
> Take a look at the [examples/lora_dreambooth/train_dreambooth.py](examples/lora_dreambooth/train_dreambooth.py) training script to try training your own Stable Diffusion model with LoRA, and play around with the [smangrul/peft-lora-sd-dreambooth](https://huggingface.co/spaces/smangrul/peft-lora-sd-dreambooth) Space which is running on a T4 instance. Learn more about the PEFT integration in Diffusers in this [tutorial](https://huggingface.co/docs/peft/main/en/tutorial/peft_integrations#diffusers).

### Transformers

PEFT is directly integrated with [Transformers](https://huggingface.co/docs/transformers/main/en/peft). After loading a model, call `add_adapter` to add a new PEFT adapter to the model:

```python
from peft import LoraConfig
model = ...  # transformers model
peft_config = LoraConfig(...)
model.add_adapter(peft_config, adapter_name="lora_1")
```

To load a trained PEFT adapter, call `load_adapter`:

```python
model = ...  # transformers model
model.load_adapter(<path-to-adapter>, adapter_name="lora_1")
```

And to switch between different adapters, call `set_adapter`:

```python
model.set_adapter("lora_2")
```

The Transformers integration doesn't include all the functionalities offered in PEFT, such as methods for merging the adapter into the base model.

### Accelerate

[Accelerate](https://huggingface.co/docs/accelerate/index) is a library for distributed training and inference on various training setups and hardware (GPUs, TPUs, Apple Silicon, etc.). PEFT models work with Accelerate out of the box, making it really convenient to train really large models or use them for inference on consumer hardware with limited resources.

### TRL

PEFT can also be applied to training LLMs with RLHF components such as the ranker and policy. Get started by reading:

* [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) with PEFT and the [TRL](https://huggingface.co/docs/trl/index) library to learn more about the Direct Preference Optimization (DPO) method and how to apply it to an LLM.
* [Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU](https://huggingface.co/blog/trl-peft) with PEFT and the [TRL](https://huggingface.co/docs/trl/index) library, and then try out the [gpt2-sentiment_peft.ipynb](https://github.com/huggingface/trl/blob/main/examples/notebooks/gpt2-sentiment.ipynb) notebook to optimize GPT2 to generate positive movie reviews.
* [StackLLaMA: A hands-on guide to train LLaMA with RLHF](https://huggingface.co/blog/stackllama) with PEFT, and then try out the [stack_llama/scripts](https://github.com/huggingface/trl/tree/main/examples/research_projects/stack_llama/scripts) for supervised finetuning, reward modeling, and RL finetuning.

## Model support

Use this [Space](https://stevhliu-peft-methods.hf.space) or check out the [docs](https://huggingface.co/docs/peft/main/en/index) to find which models officially support a PEFT method out of the box. Even if you don't see a model listed below, you can manually configure the model config to enable PEFT for a model. Read the [New transformers architecture](https://huggingface.co/docs/peft/main/en/developer_guides/custom_models#new-transformers-architectures) guide to learn how.

## Contribute

If you would like to contribute to PEFT, please check out our [contribution guide](https://huggingface.co/docs/peft/developer_guides/contributing).

## Citing 🤗 PEFT

To use 🤗 PEFT in your publication, please cite it by using the following BibTeX entry.

```bibtex
@Misc{peft,
  title =        {{PEFT}: State-of-the-art Parameter-Efficient Fine-Tuning methods},
  author =       {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes Belkada and Sayak Paul and Benjamin Bossan},
  howpublished = {\url{https://github.com/huggingface/peft}},
  year =         {2022}
}
```
peft/docker/README.md
ADDED
@@ -0,0 +1,8 @@
# PEFT Docker images

Here we store all PEFT Docker images used in our testing infrastructure. For now, we use Python 3.11 on all of our images.

- `peft-cpu`: PEFT compiled on CPU with all other HF libraries installed from the main branch
- `peft-gpu`: PEFT compiled for NVIDIA GPUs with all other HF libraries installed from the main branch
- `peft-gpu-bnb-source`: PEFT compiled for NVIDIA GPUs with `bitsandbytes` and all other HF libraries installed from the main branch
- `peft-gpu-bnb-latest`: PEFT compiled for NVIDIA GPUs with `bitsandbytes` compiled from main and all other HF libraries installed from the latest PyPI release
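If you want to build one of these images yourself, something along these lines should work from the repository root. The image tag and build context are illustrative assumptions, not part of the CI setup documented here.

```bash
# Build the GPU testing image locally (requires Docker with NVIDIA container support to run it)
docker build -t peft-gpu -f docker/peft-gpu/Dockerfile .
```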
peft/docker/peft-cpu/Dockerfile
ADDED
@@ -0,0 +1,52 @@
# Builds CPU docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.11
# Install apt libs - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN apt-get update && \
    apt-get install -y curl git wget software-properties-common git-lfs && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*


# Install audio-related libraries
RUN apt-get update && \
    apt install -y ffmpeg

RUN apt install -y libsndfile1-dev
RUN git lfs install

# Create our conda env - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN conda create --name peft python=${PYTHON_VERSION} ipython jupyter pip
RUN python3 -m pip install --no-cache-dir --upgrade pip

# Below is copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/peft/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]
# Activate the conda env and install transformers + accelerate from source
RUN source activate peft && \
    python3 -m pip install --no-cache-dir \
    librosa \
    "soundfile>=0.12.1" \
    scipy \
    git+https://github.com/huggingface/transformers \
    git+https://github.com/huggingface/accelerate \
    peft[test]@git+https://github.com/huggingface/peft

# Install apt libs
RUN apt-get update && \
    apt-get install -y curl git wget && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

RUN echo "source activate peft" >> ~/.profile

# Activate the virtualenv
CMD ["/bin/bash"]
peft/docker/peft-gpu-bnb-latest/Dockerfile
ADDED
@@ -0,0 +1,68 @@
# Builds GPU docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.11
# Install apt libs - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN apt-get update && \
    apt-get install -y curl git wget software-properties-common git-lfs && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

# Install audio-related libraries
RUN apt-get update && \
    apt install -y ffmpeg

RUN apt install -y libsndfile1-dev
RUN git lfs install

# Create our conda env - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN conda create --name peft python=${PYTHON_VERSION} ipython jupyter pip
RUN python3 -m pip install --no-cache-dir --upgrade pip

# Below is copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/peft/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]

# Stage 2
FROM nvidia/cuda:12.6.3-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]

# Install apt libs
RUN apt-get update && \
    apt-get install -y curl git wget cmake && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

# Activate the conda env and install transformers + accelerate from latest pypi
# Also clone BNB and build it from source.
RUN source activate peft && \
    python3 -m pip install -U --no-cache-dir \
    librosa \
    "soundfile>=0.12.1" \
    scipy \
    transformers \
    accelerate \
    peft \
    optimum \
    auto-gptq && \
    git clone https://github.com/bitsandbytes-foundation/bitsandbytes && cd bitsandbytes && \
    cmake -B . -DCOMPUTE_BACKEND=cuda -S . && \
    cmake --build . && \
    pip install -e . && \
    pip freeze | grep bitsandbytes

RUN echo "source activate peft" >> ~/.profile

# Activate the virtualenv
CMD ["/bin/bash"]
peft/docker/peft-gpu-bnb-source/Dockerfile
ADDED
@@ -0,0 +1,68 @@
# Builds GPU docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.11
# Install apt libs - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN apt-get update && \
    apt-get install -y curl git wget software-properties-common git-lfs && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

# Install audio-related libraries
RUN apt-get update && \
    apt install -y ffmpeg

RUN apt install -y libsndfile1-dev
RUN git lfs install

# Create our conda env - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN conda create --name peft python=${PYTHON_VERSION} ipython jupyter pip
RUN python3 -m pip install --no-cache-dir --upgrade pip

# Below is copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/peft/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]

# Stage 2
FROM nvidia/cuda:12.6.3-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]

# Install apt libs
RUN apt-get update && \
    apt-get install -y curl git wget cmake && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

# Activate the conda env and install transformers + accelerate from source
# Also clone BNB and build it from source.
RUN source activate peft && \
    python3 -m pip install -U --no-cache-dir \
    librosa \
    "soundfile>=0.12.1" \
    scipy \
    git+https://github.com/huggingface/transformers \
    git+https://github.com/huggingface/accelerate \
    peft[test]@git+https://github.com/huggingface/peft \
    optimum \
    auto-gptq && \
    git clone https://github.com/bitsandbytes-foundation/bitsandbytes && cd bitsandbytes && \
    cmake -B . -DCOMPUTE_BACKEND=cuda -S . && \
    cmake --build . && \
    pip install -e . && \
    pip freeze | grep bitsandbytes

RUN echo "source activate peft" >> ~/.profile

# Activate the virtualenv
CMD ["/bin/bash"]
peft/docker/peft-gpu/Dockerfile
ADDED
@@ -0,0 +1,70 @@
# Builds GPU docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.11
# Install apt libs - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
# Install audio-related libraries
RUN apt-get update && \
    apt-get install -y curl git wget software-properties-common git-lfs ffmpeg libsndfile1-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

RUN git lfs install

# Create our conda env - copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
RUN conda create --name peft python=${PYTHON_VERSION} ipython jupyter pip

# Below is copied from https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/peft/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]

# Stage 2
FROM nvidia/cuda:12.4.1-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

# Install apt libs
RUN apt-get update && \
    apt-get install -y curl git wget && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists*

RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]
RUN source activate peft && \
    python3 -m pip install --no-cache-dir bitsandbytes optimum auto-gptq && \
    # Add autoawq for quantization testing
    python3 -m pip install --no-cache-dir https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.7.post2/autoawq-0.2.7.post2-py3-none-any.whl && \
    python3 -m pip install --no-cache-dir https://github.com/casper-hansen/AutoAWQ_kernels/releases/download/v0.0.9/autoawq_kernels-0.0.9-cp311-cp311-linux_x86_64.whl && \
    # Add eetq for quantization testing
    python3 -m pip install git+https://github.com/NetEase-FuXi/EETQ.git

# Activate the conda env and install transformers + accelerate from source
RUN source activate peft && \
    python3 -m pip install -U --no-cache-dir \
    librosa \
    "soundfile>=0.12.1" \
    scipy \
    torchao \
    git+https://github.com/huggingface/transformers \
    git+https://github.com/huggingface/accelerate \
    peft[test]@git+https://github.com/huggingface/peft \
    # Add aqlm for quantization testing
    aqlm[gpu]>=1.0.2 \
    # Add HQQ for quantization testing
    hqq

RUN source activate peft && \
    pip freeze | grep transformers

RUN echo "source activate peft" >> ~/.profile

# Activate the virtualenv
CMD ["/bin/bash"]
peft/docs/Makefile
ADDED
@@ -0,0 +1,19 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SOURCEDIR     = source
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
peft/docs/README.md
ADDED
|
@@ -0,0 +1,267 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
<!---
|
| 2 |
+
Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 3 |
+
|
| 4 |
+
Licensed under the Apache License, Version 2.0 (the "License");
|
| 5 |
+
you may not use this file except in compliance with the License.
|
| 6 |
+
You may obtain a copy of the License at
|
| 7 |
+
|
| 8 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 9 |
+
|
| 10 |
+
Unless required by applicable law or agreed to in writing, software
|
| 11 |
+
distributed under the License is distributed on an "AS IS" BASIS,
|
| 12 |
+
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
| 13 |
+
See the License for the specific language governing permissions and
|
| 14 |
+
limitations under the License.
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Generating the documentation
|
| 18 |
+
|
| 19 |
+
To generate the documentation, you first have to build it. Several packages are necessary to build the doc,
|
| 20 |
+
you can install them with the following command, at the root of the code repository:
|
| 21 |
+
|
| 22 |
+
```bash
|
| 23 |
+
pip install -e ".[docs]"
|
| 24 |
+
```
|
| 25 |
+
|
| 26 |
+
Then you need to install our special tool that builds the documentation:
|
| 27 |
+
|
| 28 |
+
```bash
|
| 29 |
+
pip install git+https://github.com/huggingface/doc-builder
|
| 30 |
+
```
|
| 31 |
+
|
| 32 |
+
---
|
| 33 |
+
**NOTE**
|
| 34 |
+
|
| 35 |
+
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
|
| 36 |
+
check how they look before committing for instance). You don't have to commit to the built documentation.
|
| 37 |
+
|
| 38 |
+
---
|
| 39 |
+
|
| 40 |
+
## Building the documentation
|
| 41 |
+
|
| 42 |
+
Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
|
| 43 |
+
typing the following command:
|
| 44 |
+
|
| 45 |
+
```bash
|
| 46 |
+
doc-builder build peft docs/source/ --build_dir ~/tmp/test-build
|
| 47 |
+
```
|
| 48 |
+
|
| 49 |
+
You can adapt the `--build_dir` to set any temporary folder you prefer. This command will create it and generate
|
| 50 |
+
the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
|
| 51 |
+
Markdown editor.
|
| 52 |
+
|
| 53 |
+
## Previewing the documentation
|
| 54 |
+
|
| 55 |
+
To preview the docs, first install the `watchdog` module with:
|
| 56 |
+
|
| 57 |
+
```bash
|
| 58 |
+
pip install watchdog
|
| 59 |
+
```
|
| 60 |
+
|
| 61 |
+
Then run the following command:
|
| 62 |
+
|
| 63 |
+
```bash
|
| 64 |
+
doc-builder preview {package_name} {path_to_docs}
|
| 65 |
+
```
|
| 66 |
+
|
| 67 |
+
For example:
|
| 68 |
+
|
| 69 |
+
```bash
|
| 70 |
+
doc-builder preview peft docs/source
|
| 71 |
+
```
|
| 72 |
+
|
| 73 |
+
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. A bot will add a comment with a link to where the documentation with your changes lives.
|
| 74 |
+
|
| 75 |
+
---
|
| 76 |
+
**NOTE**
|
| 77 |
+
|
| 78 |
+
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it and call `doc-builder preview ...` again).
|
| 79 |
+
|
| 80 |
+
---
|
| 81 |
+
|
| 82 |
+
## Adding a new element to the navigation bar
|
| 83 |
+
|
| 84 |
+
Accepted files are Markdown (.md or .mdx).
|
| 85 |
+
|
| 86 |
+
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
|
| 87 |
+
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/peft/blob/main/docs/source/_toctree.yml) file.
|
| 88 |
+
|
| 89 |
+
## Renaming section headers and moving sections
|
| 90 |
+
|
| 91 |
+
It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
|
| 92 |
+
|
| 93 |
+
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
|
| 94 |
+
|
| 95 |
+
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
|
| 96 |
+
|
| 97 |
+
```
|
| 98 |
+
Sections that were moved:
|
| 99 |
+
|
| 100 |
+
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
|
| 101 |
+
```
|
| 102 |
+
and of course, if you moved it to another file, then:
|
| 103 |
+
|
| 104 |
+
```
|
| 105 |
+
Sections that were moved:
|
| 106 |
+
|
| 107 |
+
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
|
| 108 |
+
```
|
| 109 |
+
|
| 110 |
+
Use the relative style to link to the new file so that the versioned docs continue to work.
|
| 111 |
+
|
| 112 |
+
|
| 113 |
+
## Writing Documentation - Specification
|
| 114 |
+
|
| 115 |
+
The `huggingface/peft` documentation follows the
|
| 116 |
+
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
|
| 117 |
+
although we can write them directly in Markdown.
|
| 118 |
+
|
| 119 |
+
### Adding a new tutorial
|
| 120 |
+
|
| 121 |
+
Adding a new tutorial or section is done in two steps:
|
| 122 |
+
|
| 123 |
+
- Add a new file under `./source`. This file can either be ReStructuredText (.rst) or Markdown (.md).
|
| 124 |
+
- Link that file in `./source/_toctree.yml` on the correct toc-tree.
|
| 125 |
+
|
| 126 |
+
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
|
| 127 |
+
depending on the intended targets (beginners, more advanced users, or researchers) it should go into sections two, three, or
|
| 128 |
+
four.
|
| 129 |
+
|
| 130 |
+
### Writing source documentation
|
| 131 |
+
|
| 132 |
+
Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
|
| 133 |
+
and objects like True, None, or any strings should usually be put in `code`.
|
| 134 |
+
|
| 135 |
+
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
|
| 136 |
+
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
|
| 137 |
+
function to be in the main package.
|
| 138 |
+
|
| 139 |
+
If you want to create a link to some internal class or function, you need to
|
| 140 |
+
provide its path. For instance: \[\`utils.gather\`\]. This will be converted into a link with
|
| 141 |
+
`utils.gather` in the description. To get rid of the path and only keep the name of the object you are
|
| 142 |
+
linking to in the description, add a ~: \[\`~utils.gather\`\] will generate a link with `gather` in the description.
|
| 143 |
+
|
| 144 |
+
The same works for methods, so you can either use \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
|
| 145 |
+
|
| 146 |
+
#### Defining arguments in a method
|
| 147 |
+
|
| 148 |
+
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
|
| 149 |
+
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
|
| 150 |
+
description:
|
| 151 |
+
|
| 152 |
+
```
|
| 153 |
+
Args:
|
| 154 |
+
n_layers (`int`): The number of layers of the model.
|
| 155 |
+
```
|
| 156 |
+
|
| 157 |
+
If the description is too long to fit in one line (more than 119 characters in total), another indentation is necessary
|
| 158 |
+
before writing the description after the argument.
|
| 159 |
+
|
| 160 |
+
Finally, to maintain uniformity if any *one* description is too long to fit on one line, the
|
| 161 |
+
rest of the parameters should follow suit and have an indentation before their description.
|
| 162 |
+
|
| 163 |
+
Here's an example showcasing everything so far:
|
| 164 |
+
|
| 165 |
+
```
|
| 166 |
+
Args:
|
| 167 |
+
gradient_accumulation_steps (`int`, *optional*, defaults to 1):
|
| 168 |
+
The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with `Accelerator.accumulate`.
|
| 169 |
+
cpu (`bool`, *optional*):
|
| 170 |
+
Whether or not to force the script to execute on CPU. Will ignore any GPUs available if set to `True` and force the execution on one process only.
|
| 171 |
+
```
|
| 172 |
+
|
| 173 |
+
For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
|
| 174 |
+
following signature:
|
| 175 |
+
|
| 176 |
+
```
|
| 177 |
+
def my_function(x: str = None, a: float = 1):
|
| 178 |
+
```
|
| 179 |
+
|
| 180 |
+
then its documentation should look like this:
|
| 181 |
+
|
| 182 |
+
```
|
| 183 |
+
Args:
|
| 184 |
+
x (`str`, *optional*):
|
| 185 |
+
This argument controls ... and has a description longer than 119 chars.
|
| 186 |
+
a (`float`, *optional*, defaults to 1):
|
| 187 |
+
This argument is used to ... and has a description longer than 119 chars.
|
| 188 |
+
```
|
| 189 |
+
|
| 190 |
+
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
|
| 191 |
+
if the first line describing your argument type and its default gets long, you can't break it into several lines. You can
|
| 192 |
+
however write as many lines as you want in the indented description (see the example above).
|
| 193 |
+
|
| 194 |
+
#### Writing a multi-line code block
|
| 195 |
+
|
| 196 |
+
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
|
| 197 |
+
|
| 198 |
+
|
| 199 |
+
````
|
| 200 |
+
```python
|
| 201 |
+
# first line of code
|
| 202 |
+
# second line
|
| 203 |
+
# etc
|
| 204 |
+
```
|
| 205 |
+
````
|
| 206 |
+
|
| 207 |
+
#### Writing a return block
|
| 208 |
+
|
| 209 |
+
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
|
| 210 |
+
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
|
| 211 |
+
building the return.
|
| 212 |
+
|
| 213 |
+
Here's an example of a single value return:
|
| 214 |
+
|
| 215 |
+
```
|
| 216 |
+
Returns:
|
| 217 |
+
`List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
|
| 218 |
+
```
|
| 219 |
+
|
| 220 |
+
Here's an example of a tuple return, comprising several objects:
|
| 221 |
+
|
| 222 |
+
```
|
| 223 |
+
Returns:
|
| 224 |
+
`tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
|
| 225 |
+
- **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
|
| 226 |
+
Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
|
| 227 |
+
- **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
|
| 228 |
+
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
| 229 |
+
```
|
| 230 |
+
|
| 231 |
+
## Styling the docstring
|
| 232 |
+
|
| 233 |
+
We have an automatic script running with the `make style` command that will make sure that:
|
| 234 |
+
- the docstrings fully take advantage of the line width
|
| 235 |
+
- all code examples are formatted using black, like the code of the Transformers library
|
| 236 |
+
|
| 237 |
+
This script may have some weird failures if you make a syntax mistake or if you uncover a bug. Therefore, it's
|
| 238 |
+
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
|
| 239 |
+
easily.
|
| 240 |
+
|
| 241 |
+
## Writing documentation examples
|
| 242 |
+
|
| 243 |
+
The syntax for example docstrings can look as follows:
|
| 244 |
+
|
| 245 |
+
```
|
| 246 |
+
Example:
|
| 247 |
+
|
| 248 |
+
```python
|
| 249 |
+
>>> import time
|
| 250 |
+
>>> from accelerate import Accelerator
|
| 251 |
+
>>> accelerator = Accelerator()
|
| 252 |
+
>>> if accelerator.is_main_process:
|
| 253 |
+
... time.sleep(2)
|
| 254 |
+
... else:
|
| 255 |
+
... print("I'm waiting for the main process to finish its sleep...")
|
| 256 |
+
>>> accelerator.wait_for_everyone()
|
| 257 |
+
>>> # Should print on every process at the same time
|
| 258 |
+
>>> print("Everyone is here")
|
| 259 |
+
```
|
| 260 |
+
```
|
| 261 |
+
|
| 262 |
+
The docstring should give a minimal, clear example of how the respective function
|
| 263 |
+
is to be used in inference and also include the expected (ideally sensible)
|
| 264 |
+
output.
|
| 265 |
+
Often, readers will try out the example before even going through the function
|
| 266 |
+
or class definitions. Therefore, it is of utmost importance that the example
|
| 267 |
+
works as expected.
|
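For instance, a PEFT-flavored example following these conventions might look like the sketch below; the model name and the printed output are illustrative placeholders, not part of any existing docstring:

```python
>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> peft_model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"))
>>> # only the LoRA adapter weights should be reported as trainable
>>> peft_model.print_trainable_parameters()
trainable params: ...
```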
peft/docs/source/_config.py
ADDED
|
@@ -0,0 +1,7 @@
| 1 |
+
# docstyle-ignore
|
| 2 |
+
INSTALL_CONTENT = """
|
| 3 |
+
# PEFT installation
|
| 4 |
+
! pip install peft accelerate transformers
|
| 5 |
+
# To install from source instead of the last release, comment the command above and uncomment the following one.
|
| 6 |
+
# ! pip install git+https://github.com/huggingface/peft.git
|
| 7 |
+
"""
|
peft/docs/source/_toctree.yml
ADDED
|
@@ -0,0 +1,151 @@
| 1 |
+
- title: Get started
|
| 2 |
+
sections:
|
| 3 |
+
- local: index
|
| 4 |
+
title: 🤗 PEFT
|
| 5 |
+
- local: quicktour
|
| 6 |
+
title: Quicktour
|
| 7 |
+
- local: install
|
| 8 |
+
title: Installation
|
| 9 |
+
|
| 10 |
+
- title: Tutorial
|
| 11 |
+
sections:
|
| 12 |
+
- local: tutorial/peft_model_config
|
| 13 |
+
title: Configurations and models
|
| 14 |
+
- local: tutorial/peft_integrations
|
| 15 |
+
title: Integrations
|
| 16 |
+
|
| 17 |
+
- title: PEFT method guides
|
| 18 |
+
sections:
|
| 19 |
+
- local: task_guides/prompt_based_methods
|
| 20 |
+
title: Prompt-based methods
|
| 21 |
+
- local: task_guides/lora_based_methods
|
| 22 |
+
title: LoRA methods
|
| 23 |
+
- local: task_guides/ia3
|
| 24 |
+
title: IA3
|
| 25 |
+
|
| 26 |
+
- title: Developer guides
|
| 27 |
+
sections:
|
| 28 |
+
- local: developer_guides/model_merging
|
| 29 |
+
title: Model merging
|
| 30 |
+
- local: developer_guides/quantization
|
| 31 |
+
title: Quantization
|
| 32 |
+
- local: developer_guides/lora
|
| 33 |
+
title: LoRA
|
| 34 |
+
- local: developer_guides/custom_models
|
| 35 |
+
title: Custom models
|
| 36 |
+
- local: developer_guides/low_level_api
|
| 37 |
+
title: Adapter injection
|
| 38 |
+
- local: developer_guides/mixed_models
|
| 39 |
+
title: Mixed adapter types
|
| 40 |
+
- local: developer_guides/torch_compile
|
| 41 |
+
title: torch.compile
|
| 42 |
+
- local: developer_guides/contributing
|
| 43 |
+
title: Contribute to PEFT
|
| 44 |
+
- local: developer_guides/troubleshooting
|
| 45 |
+
title: Troubleshooting
|
| 46 |
+
- local: developer_guides/checkpoint
|
| 47 |
+
title: PEFT checkpoint format
|
| 48 |
+
|
| 49 |
+
- title: 🤗 Accelerate integrations
|
| 50 |
+
sections:
|
| 51 |
+
- local: accelerate/deepspeed
|
| 52 |
+
title: DeepSpeed
|
| 53 |
+
- local: accelerate/fsdp
|
| 54 |
+
title: Fully Sharded Data Parallel
|
| 55 |
+
|
| 56 |
+
- title: Conceptual guides
|
| 57 |
+
sections:
|
| 58 |
+
- local: conceptual_guides/adapter
|
| 59 |
+
title: Adapters
|
| 60 |
+
- local: conceptual_guides/prompting
|
| 61 |
+
title: Soft prompts
|
| 62 |
+
- local: conceptual_guides/ia3
|
| 63 |
+
title: IA3
|
| 64 |
+
- local: conceptual_guides/oft
|
| 65 |
+
title: OFT/BOFT
|
| 66 |
+
|
| 67 |
+
- sections:
|
| 68 |
+
- sections:
|
| 69 |
+
- local: package_reference/auto_class
|
| 70 |
+
title: AutoPeftModel
|
| 71 |
+
- local: package_reference/peft_model
|
| 72 |
+
title: PEFT model
|
| 73 |
+
- local: package_reference/peft_types
|
| 74 |
+
title: PEFT types
|
| 75 |
+
- local: package_reference/config
|
| 76 |
+
title: Configuration
|
| 77 |
+
- local: package_reference/tuners
|
| 78 |
+
title: Tuner
|
| 79 |
+
title: Main classes
|
| 80 |
+
- sections:
|
| 81 |
+
- local: package_reference/adalora
|
| 82 |
+
title: AdaLoRA
|
| 83 |
+
- local: package_reference/ia3
|
| 84 |
+
title: IA3
|
| 85 |
+
- local: package_reference/llama_adapter
|
| 86 |
+
title: Llama-Adapter
|
| 87 |
+
- local: package_reference/loha
|
| 88 |
+
title: LoHa
|
| 89 |
+
- local: package_reference/lokr
|
| 90 |
+
title: LoKr
|
| 91 |
+
- local: package_reference/lora
|
| 92 |
+
title: LoRA
|
| 93 |
+
- local: package_reference/xlora
|
| 94 |
+
title: X-LoRA
|
| 95 |
+
- local: package_reference/adapter_utils
|
| 96 |
+
title: LyCORIS
|
| 97 |
+
- local: package_reference/multitask_prompt_tuning
|
| 98 |
+
title: Multitask Prompt Tuning
|
| 99 |
+
- local: package_reference/oft
|
| 100 |
+
title: OFT
|
| 101 |
+
- local: package_reference/boft
|
| 102 |
+
title: BOFT
|
| 103 |
+
- local: package_reference/poly
|
| 104 |
+
title: Polytropon
|
| 105 |
+
- local: package_reference/p_tuning
|
| 106 |
+
title: P-tuning
|
| 107 |
+
- local: package_reference/prefix_tuning
|
| 108 |
+
title: Prefix tuning
|
| 109 |
+
- local: package_reference/prompt_tuning
|
| 110 |
+
title: Prompt tuning
|
| 111 |
+
- local: package_reference/layernorm_tuning
|
| 112 |
+
title: Layernorm tuning
|
| 113 |
+
- local: package_reference/vera
|
| 114 |
+
title: VeRA
|
| 115 |
+
- local: package_reference/fourierft
|
| 116 |
+
title: FourierFT
|
| 117 |
+
- local: package_reference/vblora
|
| 118 |
+
title: VB-LoRA
|
| 119 |
+
- local: package_reference/hra
|
| 120 |
+
title: HRA
|
| 121 |
+
- local: package_reference/cpt
|
| 122 |
+
title: CPT
|
| 123 |
+
- local: package_reference/bone
|
| 124 |
+
title: Bone
|
| 125 |
+
- local: package_reference/trainable_tokens
|
| 126 |
+
title: Trainable Tokens
|
| 127 |
+
- local: package_reference/randlora
|
| 128 |
+
title: RandLora
|
| 129 |
+
- local: package_reference/shira
|
| 130 |
+
title: SHiRA
|
| 131 |
+
- local: package_reference/c3a
|
| 132 |
+
title: C3A
|
| 133 |
+
- local: package_reference/miss
|
| 134 |
+
title: MiSS
|
| 135 |
+
- local: package_reference/road
|
| 136 |
+
title: RoAd
|
| 137 |
+
- local: package_reference/waveft
|
| 138 |
+
title: WaveFT
|
| 139 |
+
|
| 140 |
+
title: Adapters
|
| 141 |
+
- sections:
|
| 142 |
+
- local: package_reference/merge_utils
|
| 143 |
+
title: Model merge
|
| 144 |
+
- local: package_reference/helpers
|
| 145 |
+
title: Helpers
|
| 146 |
+
- local: package_reference/hotswap
|
| 147 |
+
title: Hotswapping adapters
|
| 148 |
+
- local: package_reference/functional
|
| 149 |
+
title: Functions for PEFT integration
|
| 150 |
+
title: Utilities
|
| 151 |
+
title: API reference
|
peft/docs/source/accelerate/deepspeed.md
ADDED
|
@@ -0,0 +1,449 @@
| 1 |
+
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 2 |
+
rendered properly in your Markdown viewer.
|
| 3 |
+
-->
|
| 4 |
+
|
| 5 |
+
# DeepSpeed
|
| 6 |
+
|
| 7 |
+
[DeepSpeed](https://www.deepspeed.ai/) is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization.
|
| 8 |
+
|
| 9 |
+
Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT.
|
| 10 |
+
|
| 11 |
+
## Compatibility with `bitsandbytes` quantization + LoRA
|
| 12 |
+
|
| 13 |
+
Below is a table that summarizes the compatibility between PEFT's LoRA, the [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library, and DeepSpeed ZeRO stages with respect to fine-tuning. DeepSpeed ZeRO-1 and ZeRO-2 have no effect at inference, as stage 1 only shards the optimizer states and stage 2 shards the optimizer states and gradients:
|
| 14 |
+
|
| 15 |
+
| DeepSpeed stage | Is compatible? |
|
| 16 |
+
|---|---|
|
| 17 |
+
| Zero-1 | 🟢 |
|
| 18 |
+
| Zero-2 | 🟢 |
|
| 19 |
+
| Zero-3 | 🟢 |
|
| 20 |
+
|
| 21 |
+
For DeepSpeed Stage 3 + QLoRA, please refer to the section [Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs](#use-peft-qlora-and-deepspeed-with-zero3-for-finetuning-large-models-on-multiple-gpus) below.
|
| 22 |
+
|
| 23 |
+
To confirm these observations, we ran the SFT (Supervised Fine-tuning) [official example scripts](https://github.com/huggingface/trl/tree/main/examples) of the [Transformers Reinforcement Learning (TRL) library](https://github.com/huggingface/trl) using QLoRA + PEFT and the accelerate configs available [here](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs). We ran these experiments on 2x NVIDIA T4 GPUs.
|
| 24 |
+
|
| 25 |
+
# Use PEFT and DeepSpeed with ZeRO3 for finetuning large models on multiple devices and multiple nodes
|
| 26 |
+
|
| 27 |
+
This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) for performing SFT. You'll configure the script to do SFT (supervised fine-tuning) of the Llama-70B model with LoRA and ZeRO-3 on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.
|
| 28 |
+
|
| 29 |
+
## Configuration
|
| 30 |
+
|
| 31 |
+
Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
|
| 32 |
+
|
| 33 |
+
The configuration file is used to set the default options when you launch the training script.
|
| 34 |
+
|
| 35 |
+
```bash
|
| 36 |
+
accelerate config --config_file deepspeed_config.yaml
|
| 37 |
+
```
|
| 38 |
+
|
| 39 |
+
You'll be asked a few questions about your setup, and configure the following arguments. In this example, you'll use ZeRO-3 so make sure you pick those options.
|
| 40 |
+
|
| 41 |
+
```bash
|
| 42 |
+
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
|
| 43 |
+
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them. Pass the same value as you would pass via the cmd argument, else you will encounter a mismatch error.
|
| 44 |
+
`gradient_clipping`: Enable gradient clipping with a value. Don't set this, as you will be passing it via cmd arguments.
|
| 45 |
+
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2. Set this to `none` as we don't want to enable offloading.
|
| 46 |
+
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3. Set this to `none` as we don't want to enable offloading.
|
| 47 |
+
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3. Set this to `True`.
|
| 48 |
+
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3. Set this to `True`.
|
| 49 |
+
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training. Set this to `bf16`.
|
| 50 |
+
```
|
| 51 |
+
|
| 52 |
+
Once this is done, the corresponding config should look like the one below, and you can find it in the config folder at [deepspeed_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config.yaml):
|
| 53 |
+
|
| 54 |
+
```yml
|
| 55 |
+
compute_environment: LOCAL_MACHINE
|
| 56 |
+
debug: false
|
| 57 |
+
deepspeed_config:
|
| 58 |
+
deepspeed_multinode_launcher: standard
|
| 59 |
+
gradient_accumulation_steps: 4
|
| 60 |
+
offload_optimizer_device: none
|
| 61 |
+
offload_param_device: none
|
| 62 |
+
zero3_init_flag: true
|
| 63 |
+
zero3_save_16bit_model: true
|
| 64 |
+
zero_stage: 3
|
| 65 |
+
distributed_type: DEEPSPEED
|
| 66 |
+
downcast_bf16: 'no'
|
| 67 |
+
machine_rank: 0
|
| 68 |
+
main_training_function: main
|
| 69 |
+
mixed_precision: bf16
|
| 70 |
+
num_machines: 1
|
| 71 |
+
num_processes: 8
|
| 72 |
+
rdzv_backend: static
|
| 73 |
+
same_network: true
|
| 74 |
+
tpu_env: []
|
| 75 |
+
tpu_use_cluster: false
|
| 76 |
+
tpu_use_sudo: false
|
| 77 |
+
use_cpu: false
|
| 78 |
+
```
|
| 79 |
+
|
| 80 |
+
## Launch command
|
| 81 |
+
|
| 82 |
+
The launch command is available at [run_peft_deepspeed.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_deepspeed.sh) and it is also shown below:
|
| 83 |
+
```bash
|
| 84 |
+
accelerate launch --config_file "configs/deepspeed_config.yaml" train.py \
|
| 85 |
+
--seed 100 \
|
| 86 |
+
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
|
| 87 |
+
--dataset_name "smangrul/ultrachat-10k-chatml" \
|
| 88 |
+
--chat_template_format "chatml" \
|
| 89 |
+
--add_special_tokens False \
|
| 90 |
+
--append_concat_token False \
|
| 91 |
+
--splits "train,test" \
|
| 92 |
+
--max_seq_len 2048 \
|
| 93 |
+
--num_train_epochs 1 \
|
| 94 |
+
--logging_steps 5 \
|
| 95 |
+
--log_level "info" \
|
| 96 |
+
--logging_strategy "steps" \
|
| 97 |
+
--eval_strategy "epoch" \
|
| 98 |
+
--save_strategy "epoch" \
|
| 99 |
+
--push_to_hub \
|
| 100 |
+
--hub_private_repo True \
|
| 101 |
+
--hub_strategy "every_save" \
|
| 102 |
+
--bf16 True \
|
| 103 |
+
--packing True \
|
| 104 |
+
--learning_rate 1e-4 \
|
| 105 |
+
--lr_scheduler_type "cosine" \
|
| 106 |
+
--weight_decay 1e-4 \
|
| 107 |
+
--warmup_ratio 0.0 \
|
| 108 |
+
--max_grad_norm 1.0 \
|
| 109 |
+
--output_dir "llama-sft-lora-deepspeed" \
|
| 110 |
+
--per_device_train_batch_size 8 \
|
| 111 |
+
--per_device_eval_batch_size 8 \
|
| 112 |
+
--gradient_accumulation_steps 4 \
|
| 113 |
+
--gradient_checkpointing True \
|
| 114 |
+
--use_reentrant False \
|
| 115 |
+
--dataset_text_field "content" \
|
| 116 |
+
--use_flash_attn True \
|
| 117 |
+
--use_peft_lora True \
|
| 118 |
+
--lora_r 8 \
|
| 119 |
+
--lora_alpha 16 \
|
| 120 |
+
--lora_dropout 0.1 \
|
| 121 |
+
--lora_target_modules "all-linear" \
|
| 122 |
+
--use_4bit_quantization False
|
| 123 |
+
```
|
| 124 |
+
|
| 125 |
+
Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the DeepSpeed config file and finetuning the 70B Llama model on a subset of the ultrachat dataset; a rough PEFT-config equivalent of these LoRA flags is sketched below.
|
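The following is a minimal sketch, not the exact code in `train.py`, of the `LoraConfig` that these command line flags roughly translate to; `task_type="CAUSAL_LM"` is an assumption based on the causal LM being finetuned:

```python
from peft import LoraConfig

# mirrors --lora_r 8 --lora_alpha 16 --lora_dropout 0.1 --lora_target_modules "all-linear"
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules="all-linear",  # special value that targets every linear layer
    task_type="CAUSAL_LM",
)
```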
| 126 |
+
|
| 127 |
+
## The important parts
|
| 128 |
+
|
| 129 |
+
Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
|
| 130 |
+
|
| 131 |
+
The first thing to know is that the script uses DeepSpeed for distributed training, as the DeepSpeed config has been passed. The [`~trl.SFTTrainer`] class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, [`~trl.SFTTrainer`] internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the DeepSpeed config to create the DeepSpeed engine, which is then trained. The main code snippet is below:
|
| 132 |
+
|
| 133 |
+
```python
|
| 134 |
+
# trainer
|
| 135 |
+
trainer = SFTTrainer(
|
| 136 |
+
model=model,
|
| 137 |
+
processing_class=tokenizer,
|
| 138 |
+
args=training_args,
|
| 139 |
+
train_dataset=train_dataset,
|
| 140 |
+
eval_dataset=eval_dataset,
|
| 141 |
+
peft_config=peft_config,
|
| 142 |
+
)
|
| 143 |
+
trainer.accelerator.print(f"{trainer.model}")
|
| 144 |
+
|
| 145 |
+
# train
|
| 146 |
+
checkpoint = None
|
| 147 |
+
if training_args.resume_from_checkpoint is not None:
|
| 148 |
+
checkpoint = training_args.resume_from_checkpoint
|
| 149 |
+
trainer.train(resume_from_checkpoint=checkpoint)
|
| 150 |
+
|
| 151 |
+
# saving final model
|
| 152 |
+
trainer.save_model()
|
| 153 |
+
```
|
| 154 |
+
|
| 155 |
+
## Memory usage
|
| 156 |
+
|
| 157 |
+
In the above example, the memory consumed per GPU is 64 GB (80%) as seen in the screenshot below:
|
| 158 |
+
|
| 159 |
+
<div class="flex justify-center">
|
| 160 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_deepspeed_mem_usage.png"/>
|
| 161 |
+
</div>
|
| 162 |
+
<small>GPU memory usage for the training run</small>
|
| 163 |
+
|
| 164 |
+
## More resources
|
| 165 |
+
You can also refer to the blog post [Falcon 180B Finetuning using 🤗 PEFT and DeepSpeed](https://medium.com/@sourabmangrulkar/falcon-180b-finetuning-using-peft-and-deepspeed-b92643091d99) on how to finetune the 180B Falcon model on 16 A100 GPUs across 2 machines.
|
| 166 |
+
|
| 167 |
+
|
| 168 |
+
# Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs
|
| 169 |
+
|
| 170 |
+
In this section, we will look at how to use QLoRA and DeepSpeed Stage-3 for finetuning the 70B Llama model on 2x 40GB GPUs.
|
| 171 |
+
For this, we first need `bitsandbytes>=0.43.3`, `accelerate>=1.0.1`, `transformers>4.44.2`, `trl>0.11.4` and `peft>0.13.0`. We need to set `zero3_init_flag` to true when using the Accelerate config. Below is the config, which can be found at [deepspeed_config_z3_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config_z3_qlora.yaml):
|
| 172 |
+
|
| 173 |
+
```yml
|
| 174 |
+
compute_environment: LOCAL_MACHINE
|
| 175 |
+
debug: false
|
| 176 |
+
deepspeed_config:
|
| 177 |
+
deepspeed_multinode_launcher: standard
|
| 178 |
+
offload_optimizer_device: none
|
| 179 |
+
offload_param_device: none
|
| 180 |
+
zero3_init_flag: true
|
| 181 |
+
zero3_save_16bit_model: true
|
| 182 |
+
zero_stage: 3
|
| 183 |
+
distributed_type: DEEPSPEED
|
| 184 |
+
downcast_bf16: 'no'
|
| 185 |
+
machine_rank: 0
|
| 186 |
+
main_training_function: main
|
| 187 |
+
mixed_precision: bf16
|
| 188 |
+
num_machines: 1
|
| 189 |
+
num_processes: 2
|
| 190 |
+
rdzv_backend: static
|
| 191 |
+
same_network: true
|
| 192 |
+
tpu_env: []
|
| 193 |
+
tpu_use_cluster: false
|
| 194 |
+
tpu_use_sudo: false
|
| 195 |
+
use_cpu: false
|
| 196 |
+
```
|
| 197 |
+
|
| 198 |
+
The launch command is given below and is also available at [run_peft_qlora_deepspeed_stage3.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_deepspeed_stage3.sh):
|
| 199 |
+
```bash
|
| 200 |
+
accelerate launch --config_file "configs/deepspeed_config_z3_qlora.yaml" train.py \
|
| 201 |
+
--seed 100 \
|
| 202 |
+
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
|
| 203 |
+
--dataset_name "smangrul/ultrachat-10k-chatml" \
|
| 204 |
+
--chat_template_format "chatml" \
|
| 205 |
+
--add_special_tokens False \
|
| 206 |
+
--append_concat_token False \
|
| 207 |
+
--splits "train,test" \
|
| 208 |
+
--max_seq_len 2048 \
|
| 209 |
+
--num_train_epochs 1 \
|
| 210 |
+
--logging_steps 5 \
|
| 211 |
+
--log_level "info" \
|
| 212 |
+
--logging_strategy "steps" \
|
| 213 |
+
--eval_strategy "epoch" \
|
| 214 |
+
--save_strategy "epoch" \
|
| 215 |
+
--push_to_hub \
|
| 216 |
+
--hub_private_repo True \
|
| 217 |
+
--hub_strategy "every_save" \
|
| 218 |
+
--bf16 True \
|
| 219 |
+
--packing True \
|
| 220 |
+
--learning_rate 1e-4 \
|
| 221 |
+
--lr_scheduler_type "cosine" \
|
| 222 |
+
--weight_decay 1e-4 \
|
| 223 |
+
--warmup_ratio 0.0 \
|
| 224 |
+
--max_grad_norm 1.0 \
|
| 225 |
+
--output_dir "llama-sft-qlora-dsz3" \
|
| 226 |
+
--per_device_train_batch_size 2 \
|
| 227 |
+
--per_device_eval_batch_size 2 \
|
| 228 |
+
--gradient_accumulation_steps 2 \
|
| 229 |
+
--gradient_checkpointing True \
|
| 230 |
+
--use_reentrant True \
|
| 231 |
+
--dataset_text_field "content" \
|
| 232 |
+
--use_flash_attn True \
|
| 233 |
+
--use_peft_lora True \
|
| 234 |
+
--lora_r 8 \
|
| 235 |
+
--lora_alpha 16 \
|
| 236 |
+
--lora_dropout 0.1 \
|
| 237 |
+
--lora_target_modules "all-linear" \
|
| 238 |
+
--use_4bit_quantization True \
|
| 239 |
+
--use_nested_quant True \
|
| 240 |
+
--bnb_4bit_compute_dtype "bfloat16" \
|
| 241 |
+
--bnb_4bit_quant_storage_dtype "bfloat16"
|
| 242 |
+
```
|
| 243 |
+
|
| 244 |
+
Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type used for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **32/4 = 8** 4-bit params are packed together post quantization.
|
| 245 |
+
|
| 246 |
+
In terms of training code, the important code changes are:
|
| 247 |
+
|
| 248 |
+
```diff
|
| 249 |
+
...
|
| 250 |
+
|
| 251 |
+
bnb_config = BitsAndBytesConfig(
|
| 252 |
+
load_in_4bit=args.use_4bit_quantization,
|
| 253 |
+
bnb_4bit_quant_type=args.bnb_4bit_quant_type,
|
| 254 |
+
bnb_4bit_compute_dtype=compute_dtype,
|
| 255 |
+
bnb_4bit_use_double_quant=args.use_nested_quant,
|
| 256 |
+
+ bnb_4bit_quant_storage=quant_storage_dtype,
|
| 257 |
+
)
|
| 258 |
+
|
| 259 |
+
...
|
| 260 |
+
|
| 261 |
+
model = AutoModelForCausalLM.from_pretrained(
|
| 262 |
+
args.model_name_or_path,
|
| 263 |
+
quantization_config=bnb_config,
|
| 264 |
+
trust_remote_code=True,
|
| 265 |
+
attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
|
| 266 |
+
+ torch_dtype=quant_storage_dtype or torch.float32,
|
| 267 |
+
)
|
| 268 |
+
```
|
| 269 |
+
|
| 270 |
+
Notice that the `torch_dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by the Trainer and TRL.
|
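Put together, a minimal sketch of the quantized model loading step could look like the following; the values simply mirror the launch command above, and `bnb_4bit_quant_type="nf4"` is an assumed choice rather than something mandated by the script:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_storage_dtype = torch.bfloat16  # mirrors --bnb_4bit_quant_storage_dtype "bfloat16"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                           # --use_4bit_quantization True
    bnb_4bit_quant_type="nf4",                   # assumed quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,       # --bnb_4bit_compute_dtype "bfloat16"
    bnb_4bit_use_double_quant=True,              # --use_nested_quant True
    bnb_4bit_quant_storage=quant_storage_dtype,  # packs 4-bit params in bf16 containers
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    torch_dtype=quant_storage_dtype,  # must match the storage dtype so sharding works
)
```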
| 271 |
+
|
| 272 |
+
## Memory usage
|
| 273 |
+
|
| 274 |
+
In the above example, the memory consumed per GPU is **36.6 GB**. Therefore, what took 8x 80GB GPUs with DeepSpeed Stage 3 + LoRA, and a couple of 80GB GPUs with DDP + QLoRA, now requires only 2x 40GB GPUs. This makes finetuning of large models more accessible.
|
| 275 |
+
|
| 276 |
+
# Use PEFT and DeepSpeed with ZeRO3 and CPU Offloading for finetuning large models on a single GPU
|
| 277 |
+
This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You'll configure the script to train a large model for conditional generation with ZeRO-3 and CPU Offload.
|
| 278 |
+
|
| 279 |
+
> [!TIP]
|
| 280 |
+
> 💡 To help you get started, check out our example training scripts for [causal language modeling](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_accelerate_ds_zero3_offload.py) and [conditional generation](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You can adapt these scripts for your own applications or even use them out of the box if your task is similar to the one in the scripts.
|
| 281 |
+
|
| 282 |
+
## Configuration
|
| 283 |
+
|
| 284 |
+
Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
|
| 285 |
+
|
| 286 |
+
The configuration file is used to set the default options when you launch the training script.
|
| 287 |
+
|
| 288 |
+
```bash
|
| 289 |
+
accelerate config --config_file ds_zero3_cpu.yaml
|
| 290 |
+
```
|
| 291 |
+
|
| 292 |
+
You'll be asked a few questions about your setup, and configure the following arguments. In this example, you'll use ZeRO-3 along with CPU-Offload so make sure you pick those options.
|
| 293 |
+
|
| 294 |
+
```bash
|
| 295 |
+
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
|
| 296 |
+
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
|
| 297 |
+
`gradient_clipping`: Enable gradient clipping with value.
|
| 298 |
+
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
|
| 299 |
+
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
|
| 300 |
+
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
|
| 301 |
+
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
|
| 302 |
+
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
|
| 303 |
+
```
|
| 304 |
+
|
| 305 |
+
An example [configuration file](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/accelerate_ds_zero3_cpu_offload_config.yaml) might look like the following. The most important thing to notice is that `zero_stage` is set to `3`, and `offload_optimizer_device` and `offload_param_device` are set to the `cpu`.
|
| 306 |
+
|
| 307 |
+
```yml
|
| 308 |
+
compute_environment: LOCAL_MACHINE
|
| 309 |
+
deepspeed_config:
|
| 310 |
+
gradient_accumulation_steps: 1
|
| 311 |
+
gradient_clipping: 1.0
|
| 312 |
+
offload_optimizer_device: cpu
|
| 313 |
+
offload_param_device: cpu
|
| 314 |
+
zero3_init_flag: true
|
| 315 |
+
zero3_save_16bit_model: true
|
| 316 |
+
zero_stage: 3
|
| 317 |
+
distributed_type: DEEPSPEED
|
| 318 |
+
downcast_bf16: 'no'
|
| 319 |
+
dynamo_backend: 'NO'
|
| 320 |
+
fsdp_config: {}
|
| 321 |
+
machine_rank: 0
|
| 322 |
+
main_training_function: main
|
| 323 |
+
megatron_lm_config: {}
|
| 324 |
+
mixed_precision: 'no'
|
| 325 |
+
num_machines: 1
|
| 326 |
+
num_processes: 1
|
| 327 |
+
rdzv_backend: static
|
| 328 |
+
same_network: true
|
| 329 |
+
use_cpu: false
|
| 330 |
+
```
|
| 331 |
+
|
| 332 |
+
## The important parts
|
| 333 |
+
|
| 334 |
+
Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
|
| 335 |
+
|
| 336 |
+
Within the [`main`](https://github.com/huggingface/peft/blob/2822398fbe896f25d4dac5e468624dc5fd65a51b/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py#L103) function, the script creates an [`~accelerate.Accelerator`] class to initialize all the necessary requirements for distributed training.
|
| 337 |
+
|
| 338 |
+
> [!TIP]
|
| 339 |
+
> 💡 Feel free to change the model and dataset inside the `main` function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.
|
| 340 |
+
|
| 341 |
+
The script also creates a configuration for the 🤗 PEFT method you're using, which in this case is LoRA. The [`LoraConfig`] specifies the task type and important parameters such as the dimension of the low-rank matrices, the scaling factor for the matrices, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace `LoraConfig` with the appropriate [class](../package_reference/tuners).
|
| 342 |
+
|
| 343 |
+
```diff
|
| 344 |
+
def main():
|
| 345 |
+
+ accelerator = Accelerator()
|
| 346 |
+
model_name_or_path = "facebook/bart-large"
|
| 347 |
+
dataset_name = "twitter_complaints"
|
| 348 |
+
+ peft_config = LoraConfig(
|
| 349 |
+
task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
|
| 350 |
+
)
|
| 351 |
+
```
|
| 352 |
+
|
| 353 |
+
Throughout the script, you'll see the [`~accelerate.Accelerator.main_process_first`] and [`~accelerate.Accelerator.wait_for_everyone`] functions which help control and synchronize when processes are executed.
|
| 354 |
+
|
| 355 |
+
The [`get_peft_model`] function takes a base model and the [`peft_config`] you prepared earlier to create a [`PeftModel`]:
|
| 356 |
+
|
| 357 |
+
```diff
|
| 358 |
+
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
|
| 359 |
+
+ model = get_peft_model(model, peft_config)
|
| 360 |
+
```
|
| 361 |
+
|
| 362 |
+
Pass all the relevant training objects to 🤗 Accelerate's [`~accelerate.Accelerator.prepare`] which makes sure everything is ready for training:
|
| 363 |
+
|
| 364 |
+
```py
|
| 365 |
+
model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
|
| 366 |
+
model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler
|
| 367 |
+
)
|
| 368 |
+
```
|
| 369 |
+
|
| 370 |
+
The next bit of code checks whether the DeepSpeed plugin is used in the `Accelerator`, and if the plugin exists, whether we are using ZeRO-3. This flag is later passed to the `generate` call during inference to sync GPUs when the model parameters are sharded:
|
| 371 |
+
|
| 372 |
+
```py
|
| 373 |
+
is_ds_zero_3 = False
|
| 374 |
+
if getattr(accelerator.state, "deepspeed_plugin", None):
|
| 375 |
+
is_ds_zero_3 = accelerator.state.deepspeed_plugin.zero_stage == 3
|
| 376 |
+
```
|
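As a rough usage sketch of how this flag is consumed later in the script: it is forwarded to `generate` through its `synced_gpus` argument. The `torch`, `model`, `accelerator`, and `batch` objects come from the surrounding training script, and the keyword arguments besides `synced_gpus` are illustrative:

```python
# torch, model, accelerator and batch are assumed to exist in the surrounding script
with torch.no_grad():
    outputs = accelerator.unwrap_model(model).generate(
        input_ids=batch["input_ids"],
        synced_gpus=is_ds_zero_3,  # keep ranks in sync when parameters are sharded with ZeRO-3
        max_new_tokens=10,
    )
```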
| 377 |
+
|
| 378 |
+
Inside the training loop, the usual `loss.backward()` is replaced by 🤗 Accelerate's [`~accelerate.Accelerator.backward`] which uses the correct `backward()` method based on your configuration:
|
| 379 |
+
|
| 380 |
+
```diff
|
| 381 |
+
for epoch in range(num_epochs):
|
| 382 |
+
with TorchTracemalloc() as tracemalloc:
|
| 383 |
+
model.train()
|
| 384 |
+
total_loss = 0
|
| 385 |
+
for step, batch in enumerate(tqdm(train_dataloader)):
|
| 386 |
+
outputs = model(**batch)
|
| 387 |
+
loss = outputs.loss
|
| 388 |
+
total_loss += loss.detach().float()
|
| 389 |
+
+ accelerator.backward(loss)
|
| 390 |
+
optimizer.step()
|
| 391 |
+
lr_scheduler.step()
|
| 392 |
+
optimizer.zero_grad()
|
| 393 |
+
```
|
| 394 |
+
|
| 395 |
+
That is all! The rest of the script handles the training loop and evaluation, and even pushes the trained model to the Hub for you.
|
| 396 |
+
|
| 397 |
+
## Train
|
| 398 |
+
|
| 399 |
+
Run the following command to launch the training script. Earlier, you saved the configuration file to `ds_zero3_cpu.yaml`, so you'll need to pass the path to the launcher with the `--config_file` argument like this:
|
| 400 |
+
|
| 401 |
+
```bash
|
| 402 |
+
accelerate launch --config_file ds_zero3_cpu.yaml examples/peft_lora_seq2seq_accelerate_ds_zero3_offload.py
|
| 403 |
+
```
|
| 404 |
+
|
| 405 |
+
You'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and compares the predictions to the labels:
|
| 406 |
+
|
| 407 |
+
```bash
|
| 408 |
+
GPU Memory before entering the train : 1916
|
| 409 |
+
GPU Memory consumed at the end of the train (end-begin): 66
|
| 410 |
+
GPU Peak Memory consumed during the train (max-begin): 7488
|
| 411 |
+
GPU Total Peak Memory consumed during the train (max): 9404
|
| 412 |
+
CPU Memory before entering the train : 19411
|
| 413 |
+
CPU Memory consumed at the end of the train (end-begin): 0
|
| 414 |
+
CPU Peak Memory consumed during the train (max-begin): 0
|
| 415 |
+
CPU Total Peak Memory consumed during the train (max): 19411
|
| 416 |
+
epoch=4: train_ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
|
| 417 |
+
100%|████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00, 3.92s/it]
|
| 418 |
+
GPU Memory before entering the eval : 1982
|
| 419 |
+
GPU Memory consumed at the end of the eval (end-begin): -66
|
| 420 |
+
GPU Peak Memory consumed during the eval (max-begin): 672
|
| 421 |
+
GPU Total Peak Memory consumed during the eval (max): 2654
|
| 422 |
+
CPU Memory before entering the eval : 19411
|
| 423 |
+
CPU Memory consumed at the end of the eval (end-begin): 0
|
| 424 |
+
CPU Peak Memory consumed during the eval (max-begin): 0
|
| 425 |
+
CPU Total Peak Memory consumed during the eval (max): 19411
|
| 426 |
+
accuracy=100.0
|
| 427 |
+
eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
|
| 428 |
+
dataset['train'][label_column][:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
|
| 429 |
+
```
|
| 430 |
+
|
| 431 |
+
# Caveats
|
| 432 |
+
1. Merging when using PEFT and DeepSpeed is currently unsupported and will raise an error.
|
| 433 |
+
2. When using CPU offloading, the major gains from using PEFT to shrink the optimizer states and gradients to that of the adapter weights are realized in CPU RAM, and there won't be savings with respect to GPU memory.
|
| 434 |
+
3. DeepSpeed Stage 3 and QLoRA, when used with CPU offloading, lead to more GPU memory usage compared to disabling CPU offloading.
|
| 435 |
+
|
| 436 |
+
> [!TIP]
|
| 437 |
+
> 💡 When you have code that requires merging (and unmerging) of weights, try to manually collect the parameters with DeepSpeed Zero-3 beforehand:
|
| 438 |
+
>
|
| 439 |
+
> ```python
|
| 440 |
+
> import deepspeed
|
| 441 |
+
>
|
| 442 |
+
> is_ds_zero_3 = ... # check if Zero-3
|
| 443 |
+
>
|
| 444 |
+
> with deepspeed.zero.GatheredParameters(list(model.parameters()), enabled=is_ds_zero_3):
|
| 445 |
+
> model.merge_adapter()
|
| 446 |
+
> # do whatever is needed, then unmerge in the same context if unmerging is required
|
| 447 |
+
> ...
|
| 448 |
+
> model.unmerge_adapter()
|
| 449 |
+
> ```
|
peft/docs/source/accelerate/fsdp.md
ADDED
|
@@ -0,0 +1,285 @@
| 1 |
+
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 2 |
+
rendered properly in your Markdown viewer.
|
| 3 |
+
-->
|
| 4 |
+
|
| 5 |
+
# Fully Sharded Data Parallel
|
| 6 |
+
|
| 7 |
+
[Fully sharded data parallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) is developed for distributed training of large pretrained models up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes.
|
| 8 |
+
|
| 9 |
+
Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT.
|
| 10 |
+
|
| 11 |
+
# Use PEFT and FSDP
|
| 12 |
+
This section of the guide will help you learn how to use our SFT [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) with FSDP. You'll configure the script to do SFT (supervised fine-tuning) of the Llama-70B model with LoRA and FSDP on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.
|
| 13 |
+
|
| 14 |
+
## Configuration
|
| 15 |
+
|
| 16 |
+
Start by running the following command to [create an FSDP configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
|
| 17 |
+
|
| 18 |
+
The configuration file is used to set the default options when you launch the training script.
|
| 19 |
+
|
| 20 |
+
```bash
|
| 21 |
+
accelerate config --config_file fsdp_config.yaml
|
| 22 |
+
```
|
| 23 |
+
|
| 24 |
+
You'll be asked a few questions about your setup, and configure the following arguments. In this example, you'll answer the questionnaire as shown in the image below.
|
| 25 |
+
<div class="flex justify-center">
|
| 26 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/fsdp-peft-config.png"/>
|
| 27 |
+
</div>
|
| 28 |
+
<small>Creating Accelerate's config to use FSDP</small>
|
| 29 |
+
|
| 30 |
+
Once this is done, the corresponding config should look like the one below, and you can find it in the config folder at [fsdp_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config.yaml):
|
| 31 |
+
|
| 32 |
+
```yml
|
| 33 |
+
compute_environment: LOCAL_MACHINE
|
| 34 |
+
debug: false
|
| 35 |
+
distributed_type: FSDP
|
| 36 |
+
downcast_bf16: 'no'
|
| 37 |
+
fsdp_config:
|
| 38 |
+
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
|
| 39 |
+
fsdp_backward_prefetch: BACKWARD_PRE
|
| 40 |
+
fsdp_cpu_ram_efficient_loading: true
|
| 41 |
+
fsdp_forward_prefetch: false
|
| 42 |
+
fsdp_offload_params: false
|
| 43 |
+
fsdp_sharding_strategy: FULL_SHARD
|
| 44 |
+
fsdp_state_dict_type: SHARDED_STATE_DICT
|
| 45 |
+
fsdp_sync_module_states: true
|
| 46 |
+
fsdp_use_orig_params: false
|
| 47 |
+
machine_rank: 0
|
| 48 |
+
main_training_function: main
|
| 49 |
+
mixed_precision: bf16
|
| 50 |
+
num_machines: 1
|
| 51 |
+
num_processes: 8
|
| 52 |
+
rdzv_backend: static
|
| 53 |
+
same_network: true
|
| 54 |
+
tpu_env: []
|
| 55 |
+
tpu_use_cluster: false
|
| 56 |
+
tpu_use_sudo: false
|
| 57 |
+
use_cpu: false
|
| 58 |
+
```
|
| 59 |
+
|
| 60 |
+
## Launch command
|
| 61 |
+
|
| 62 |
+
The launch command is available at [run_peft_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_fsdp.sh) and it is also shown below:
|
| 63 |
+
```bash
|
| 64 |
+
accelerate launch --config_file "configs/fsdp_config.yaml" train.py \
|
| 65 |
+
--seed 100 \
|
| 66 |
+
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
|
| 67 |
+
--dataset_name "smangrul/ultrachat-10k-chatml" \
|
| 68 |
+
--chat_template_format "chatml" \
|
| 69 |
+
--add_special_tokens False \
|
| 70 |
+
--append_concat_token False \
|
| 71 |
+
--splits "train,test" \
|
| 72 |
+
--max_seq_len 2048 \
|
| 73 |
+
--num_train_epochs 1 \
|
| 74 |
+
--logging_steps 5 \
|
| 75 |
+
--log_level "info" \
|
| 76 |
+
--logging_strategy "steps" \
|
| 77 |
+
--eval_strategy "epoch" \
|
| 78 |
+
--save_strategy "epoch" \
|
| 79 |
+
--push_to_hub \
|
| 80 |
+
--hub_private_repo True \
|
| 81 |
+
--hub_strategy "every_save" \
|
| 82 |
+
--bf16 True \
|
| 83 |
+
--packing True \
|
| 84 |
+
--learning_rate 1e-4 \
|
| 85 |
+
--lr_scheduler_type "cosine" \
|
| 86 |
+
--weight_decay 1e-4 \
|
| 87 |
+
--warmup_ratio 0.0 \
|
| 88 |
+
--max_grad_norm 1.0 \
|
| 89 |
+
--output_dir "llama-sft-lora-fsdp" \
|
| 90 |
+
--per_device_train_batch_size 8 \
|
| 91 |
+
--per_device_eval_batch_size 8 \
|
| 92 |
+
--gradient_accumulation_steps 4 \
|
| 93 |
+
--gradient_checkpointing True \
|
| 94 |
+
--use_reentrant False \
|
| 95 |
+
--dataset_text_field "content" \
|
| 96 |
+
--use_flash_attn True \
|
| 97 |
+
--use_peft_lora True \
|
| 98 |
+
--lora_r 8 \
|
| 99 |
+
--lora_alpha 16 \
|
| 100 |
+
--lora_dropout 0.1 \
|
| 101 |
+
--lora_target_modules "all-linear" \
|
| 102 |
+
--use_4bit_quantization False
|
| 103 |
+
```
|
| 104 |
+
|
| 105 |
+
Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the FSDP config file and finetuning the 70B Llama model on a subset of the [ultrachat dataset](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k).
|
| 106 |
+
|
| 107 |
+
## The important parts
|
| 108 |
+
|
| 109 |
+
Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
|
| 110 |
+
|
| 111 |
+
The first thing to know is that the script uses FSDP for distributed training, as the FSDP config has been passed. The [`~trl.SFTTrainer`] class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, the Trainer internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the FSDP config to create an FSDP-wrapped model, which is then trained. The main code snippet is below:
|
| 112 |
+
|
| 113 |
+
```python
|
| 114 |
+
# trainer
|
| 115 |
+
trainer = SFTTrainer(
|
| 116 |
+
model=model,
|
| 117 |
+
processing_class=tokenizer,
|
| 118 |
+
args=training_args,
|
| 119 |
+
train_dataset=train_dataset,
|
| 120 |
+
eval_dataset=eval_dataset,
|
| 121 |
+
peft_config=peft_config,
|
| 122 |
+
)
|
| 123 |
+
trainer.accelerator.print(f"{trainer.model}")
|
| 124 |
+
if model_args.use_peft_lora:
|
| 125 |
+
# handle PEFT+FSDP case
|
| 126 |
+
trainer.model.print_trainable_parameters()
|
| 127 |
+
if getattr(trainer.accelerator.state, "fsdp_plugin", None):
|
| 128 |
+
from peft.utils.other import fsdp_auto_wrap_policy
|
| 129 |
+
|
| 130 |
+
fsdp_plugin = trainer.accelerator.state.fsdp_plugin
|
| 131 |
+
fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
|
| 132 |
+
|
| 133 |
+
# train
|
| 134 |
+
checkpoint = None
|
| 135 |
+
if training_args.resume_from_checkpoint is not None:
|
| 136 |
+
checkpoint = training_args.resume_from_checkpoint
|
| 137 |
+
trainer.train(resume_from_checkpoint=checkpoint)
|
| 138 |
+
|
| 139 |
+
# saving final model
|
| 140 |
+
if trainer.is_fsdp_enabled:
|
| 141 |
+
trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
|
| 142 |
+
trainer.save_model()
|
| 143 |
+
```
|
| 144 |
+
|
| 145 |
+
|
| 146 |
+
One main thing to note currently when using FSDP with PEFT is that `use_orig_params` needs to be `False` to realize the GPU memory savings. Because of `use_orig_params=False`, the auto wrap policy for FSDP needs to change so that trainable and non-trainable parameters are wrapped separately. This is done by the code snippet below, which uses the utility function `fsdp_auto_wrap_policy` from PEFT:
|
| 147 |
+
|
| 148 |
+
```python
|
| 149 |
+
if getattr(trainer.accelerator.state, "fsdp_plugin", None):
|
| 150 |
+
from peft.utils.other import fsdp_auto_wrap_policy
|
| 151 |
+
|
| 152 |
+
fsdp_plugin = trainer.accelerator.state.fsdp_plugin
|
| 153 |
+
fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
|
| 154 |
+
```
|
| 155 |
+
|
| 156 |
+
## Memory usage
|
| 157 |
+
|
| 158 |
+
In the above example, the memory consumed per GPU is 72-80 GB (90-98%), as seen in the screenshot below. The slight increase in GPU memory at the end comes from saving the model with the `FULL_STATE_DICT` state dict type instead of `SHARDED_STATE_DICT`, so that the saved adapter weights can be loaded normally with the `from_pretrained` method during inference:
|
| 159 |
+
|
| 160 |
+
<div class="flex justify-center">
|
| 161 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_fsdp_mem_usage.png"/>
|
| 162 |
+
</div>
|
| 163 |
+
<small>GPU memory usage for the training run</small>
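Once training is done, the saved adapter can be loaded for inference like any other PEFT checkpoint. Below is a minimal sketch, assuming the adapter was saved locally to the `llama-sft-lora-fsdp` output directory from the launch command above (use the Hub repo id instead if you pushed it):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")

# load the LoRA adapter weights produced by the training run
model = PeftModel.from_pretrained(base_model, "llama-sft-lora-fsdp")
model.eval()

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```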
|
| 164 |
+
|
| 165 |
+
# Use PEFT QLoRA and FSDP for finetuning large models on multiple GPUs
|
| 166 |
+
|
| 167 |
+
In this section, we will look at how to use QLoRA and FSDP for finetuning the 70B Llama model on 2x24GB GPUs. [Answer.AI](https://www.answer.ai/), in collaboration with bitsandbytes and Hugging Face 🤗, open sourced code enabling the usage of FSDP+QLoRA and explained the whole process in their insightful blogpost [You can now train a 70b language model at home](https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html). This is now integrated into the Hugging Face ecosystem.
|
| 168 |
+
|
| 169 |
+
For this, we first need `bitsandbytes>=0.43.3`, `accelerate>=1.0.1`, `transformers>4.44.2`, `trl>0.11.4` and `peft>0.13.0`. We need to set `fsdp_cpu_ram_efficient_loading=true`, `fsdp_use_orig_params=false` and `fsdp_offload_params=true` (CPU offloading) in the Accelerate config. When not using the accelerate launcher, you can alternatively set the environment variable `export FSDP_CPU_RAM_EFFICIENT_LOADING=true`. Here, we will be using the Accelerate config shown below, which can be found at [fsdp_config_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config_qlora.yaml):
|
| 170 |
+
|
| 171 |
+
```yml
|
| 172 |
+
compute_environment: LOCAL_MACHINE
|
| 173 |
+
debug: false
|
| 174 |
+
distributed_type: FSDP
|
| 175 |
+
downcast_bf16: 'no'
|
| 176 |
+
fsdp_config:
|
| 177 |
+
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
|
| 178 |
+
fsdp_backward_prefetch: BACKWARD_PRE
|
| 179 |
+
fsdp_cpu_ram_efficient_loading: true
|
| 180 |
+
fsdp_forward_prefetch: false
|
| 181 |
+
fsdp_offload_params: true
|
| 182 |
+
fsdp_sharding_strategy: FULL_SHARD
|
| 183 |
+
fsdp_state_dict_type: SHARDED_STATE_DICT
|
| 184 |
+
fsdp_sync_module_states: true
|
| 185 |
+
fsdp_use_orig_params: false
|
| 186 |
+
machine_rank: 0
|
| 187 |
+
main_training_function: main
|
| 188 |
+
mixed_precision: 'no'
|
| 189 |
+
num_machines: 1
|
| 190 |
+
num_processes: 2
|
| 191 |
+
rdzv_backend: static
|
| 192 |
+
same_network: true
|
| 193 |
+
tpu_env: []
|
| 194 |
+
tpu_use_cluster: false
|
| 195 |
+
tpu_use_sudo: false
|
| 196 |
+
use_cpu: false
|
| 197 |
+
```
|
| 198 |
+
|
| 199 |
+
The launch command is given below and is also available at [run_peft_qlora_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh):
|
| 200 |
+
```bash
|
| 201 |
+
accelerate launch --config_file "configs/fsdp_config_qlora.yaml" train.py \
|
| 202 |
+
--seed 100 \
|
| 203 |
+
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
|
| 204 |
+
--dataset_name "smangrul/ultrachat-10k-chatml" \
|
| 205 |
+
--chat_template_format "chatml" \
|
| 206 |
+
--add_special_tokens False \
|
| 207 |
+
--append_concat_token False \
|
| 208 |
+
--splits "train,test" \
|
| 209 |
+
--max_seq_len 2048 \
|
| 210 |
+
--num_train_epochs 1 \
|
| 211 |
+
--logging_steps 5 \
|
| 212 |
+
--log_level "info" \
|
| 213 |
+
--logging_strategy "steps" \
|
| 214 |
+
--eval_strategy "epoch" \
|
| 215 |
+
--save_strategy "epoch" \
|
| 216 |
+
--push_to_hub \
|
| 217 |
+
--hub_private_repo True \
|
| 218 |
+
--hub_strategy "every_save" \
|
| 219 |
+
--bf16 True \
|
| 220 |
+
--packing True \
|
| 221 |
+
--learning_rate 1e-4 \
|
| 222 |
+
--lr_scheduler_type "cosine" \
|
| 223 |
+
--weight_decay 1e-4 \
|
| 224 |
+
--warmup_ratio 0.0 \
|
| 225 |
+
--max_grad_norm 1.0 \
|
| 226 |
+
--output_dir "llama-sft-qlora-fsdp" \
|
| 227 |
+
--per_device_train_batch_size 2 \
|
| 228 |
+
--per_device_eval_batch_size 2 \
|
| 229 |
+
--gradient_accumulation_steps 2 \
|
| 230 |
+
--gradient_checkpointing True \
|
| 231 |
+
--use_reentrant True \
|
| 232 |
+
--dataset_text_field "content" \
|
| 233 |
+
--use_flash_attn True \
|
| 234 |
+
--use_peft_lora True \
|
| 235 |
+
--lora_r 8 \
|
| 236 |
+
--lora_alpha 16 \
|
| 237 |
+
--lora_dropout 0.1 \
|
| 238 |
+
--lora_target_modules "all-linear" \
|
| 239 |
+
--use_4bit_quantization True \
|
| 240 |
+
--use_nested_quant True \
|
| 241 |
+
--bnb_4bit_compute_dtype "bfloat16" \
|
| 242 |
+
--bnb_4bit_quant_storage_dtype "bfloat16"
|
| 243 |
+
```
|
| 244 |
+
|
| 245 |
+
Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization. When using mixed precision training with `bfloat16`, `bnb_4bit_quant_storage_dtype` can be either `bfloat16` for pure `bfloat16` finetuning, or `float32` for automatic mixed precision (this consumes more GPU memory). When using mixed precision training with `float16`, `bnb_4bit_quant_storage_dtype` should be set to `float32` for stable automatic mixed precision training.
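To make the packing arithmetic concrete, here is a small illustrative sketch (the variable name is hypothetical and simply mirrors the CLI argument):

```python
import torch

# dtype chosen via --bnb_4bit_quant_storage_dtype (hypothetical variable, for illustration only)
quant_storage_dtype = getattr(torch, "bfloat16")

# a 16-bit storage dtype holds 16 / 4 = 4 quantized 4-bit parameters per element
params_per_storage_element = torch.finfo(quant_storage_dtype).bits // 4
print(params_per_storage_element)  # 4
```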
|
| 246 |
+
|
| 247 |
+
In terms of training code, the important code changes are:
|
| 248 |
+
|
| 249 |
+
```diff
|
| 250 |
+
...
|
| 251 |
+
|
| 252 |
+
bnb_config = BitsAndBytesConfig(
|
| 253 |
+
load_in_4bit=args.use_4bit_quantization,
|
| 254 |
+
bnb_4bit_quant_type=args.bnb_4bit_quant_type,
|
| 255 |
+
bnb_4bit_compute_dtype=compute_dtype,
|
| 256 |
+
bnb_4bit_use_double_quant=args.use_nested_quant,
|
| 257 |
+
+ bnb_4bit_quant_storage=quant_storage_dtype,
|
| 258 |
+
)
|
| 259 |
+
|
| 260 |
+
...
|
| 261 |
+
|
| 262 |
+
model = AutoModelForCausalLM.from_pretrained(
|
| 263 |
+
args.model_name_or_path,
|
| 264 |
+
quantization_config=bnb_config,
|
| 265 |
+
trust_remote_code=True,
|
| 266 |
+
attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
|
| 267 |
+
+ torch_dtype=quant_storage_dtype or torch.float32,
|
| 268 |
+
)
|
| 269 |
+
```
|
| 270 |
+
|
| 271 |
+
Notice that `torch_dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by the Trainer and TRL.
|
| 272 |
+
|
| 273 |
+
## Memory usage
|
| 274 |
+
|
| 275 |
+
In the above example, the memory consumed per GPU is **19.6 GB** while CPU RAM usage is around **107 GB**. When disabling CPU offloading, the GPU memory usage is **35.6 GB/GPU**. Therefore, what took 16x80GB GPUs for full finetuning, 8x80GB GPUs with FSDP+LoRA, and a couple of 80GB GPUs with DDP+QLoRA now requires 2x24GB GPUs. This makes finetuning of large models more accessible.
|
| 276 |
+
|
| 277 |
+
## More resources
|
| 278 |
+
You can also refer to the [llama-recipes](https://github.com/facebookresearch/llama-recipes/?tab=readme-ov-file#fine-tuning) repo and the [Getting started with Llama](https://llama.meta.com/get-started/#fine-tuning) guide for more on how to finetune using FSDP and PEFT.
|
| 279 |
+
|
| 280 |
+
## Caveats
|
| 281 |
+
1. Merging when using PEFT and FSDP is currently unsupported and will raise an error.
|
| 282 |
+
2. Passing the `modules_to_save` config parameter is untested at present.
|
| 283 |
+
3. GPU memory saving when using CPU offloading is untested at present.
|
| 284 |
+
4. When using FSDP+QLoRA, `paged_adamw_8bit` currently results in an error when saving a checkpoint.
|
| 285 |
+
5. DoRA training with FSDP should work (albeit at lower speed than LoRA). If combined with bitsandbytes (QDoRA), 4-bit quantization should also work, but 8-bit quantization has known issues and is not recommended.
|
peft/docs/source/conceptual_guides/adapter.md
ADDED
|
@@ -0,0 +1,136 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Adapters
|
| 18 |
+
|
| 19 |
+
Adapter-based methods add extra trainable parameters after the attention and fully-connected layers of a frozen pretrained model to reduce memory usage and speed up training. The method varies depending on the adapter; it could simply be an extra added layer, or it could express the weight updates ∆W as a low-rank decomposition of the weight matrix. Either way, the adapters are typically small but demonstrate comparable performance to a fully finetuned model and enable training larger models with fewer resources.
|
| 20 |
+
|
| 21 |
+
This guide will give you a brief overview of the adapter methods supported by PEFT (if you're interested in learning more details about a specific method, take a look at the linked paper).
|
| 22 |
+
|
| 23 |
+
## Low-Rank Adaptation (LoRA)
|
| 24 |
+
|
| 25 |
+
> [!TIP]
|
| 26 |
+
> LoRA is one of the most popular PEFT methods and a good starting point if you're just getting started with PEFT. It was originally developed for large language models but it is a tremendously popular training method for diffusion models because of its efficiency and effectiveness.
|
| 27 |
+
|
| 28 |
+
As mentioned briefly earlier, [LoRA](https://hf.co/papers/2106.09685) is a technique that accelerates finetuning large models while consuming less memory.
|
| 29 |
+
|
| 30 |
+
LoRA represents the weight updates ∆W with two smaller matrices (called *update matrices*) through low-rank decomposition. These new matrices can be trained to adapt to the new data while keeping the overall number of parameters low. The original weight matrix remains frozen and doesn't receive any further updates. To produce the final results, the original and extra adapted weights are combined. You could also merge the adapter weights with the base model to eliminate inference latency.
|
| 31 |
+
|
| 32 |
+
<div class="flex justify-center">
|
| 33 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_animated.gif"/>
|
| 34 |
+
</div>
|
| 35 |
+
|
| 36 |
+
This approach has a number of advantages:
|
| 37 |
+
|
| 38 |
+
* LoRA makes finetuning more efficient by drastically reducing the number of trainable parameters.
|
| 39 |
+
* The original pretrained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
|
| 40 |
+
* LoRA is orthogonal to other parameter-efficient methods and can be combined with many of them.
|
| 41 |
+
* Performance of models finetuned using LoRA is comparable to the performance of fully finetuned models.
|
| 42 |
+
|
| 43 |
+
In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. However, for simplicity and further parameter efficiency, LoRA is typically only applied to the attention blocks in Transformer models. The resulting number of trainable parameters in a LoRA model depends on the size of the update matrices, which is determined mainly by the rank `r` and the shape of the original weight matrix.
|
| 44 |
+
|
| 45 |
+
<div class="flex justify-center">
|
| 46 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora.png"/>
|
| 47 |
+
</div>
|
| 48 |
+
<small><a href="https://hf.co/papers/2103.10385">Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation</a></small>
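To get a feel for how the trainable parameter count follows from the rank `r` and the shape of the original weight matrix, here is a quick back-of-the-envelope sketch (the dimensions are just example values):

```python
# example dimensions: one 4096 x 4096 projection matrix adapted with LoRA rank 8
d_out, d_in, r = 4096, 4096, 8

dense_update_params = d_out * d_in        # a full ∆W would need ~16.8M parameters
lora_update_params = r * (d_in + d_out)   # A is (r x d_in), B is (d_out x r): ~65.5K parameters

print(f"~{dense_update_params / lora_update_params:.0f}x fewer trainable parameters")
```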
|
| 49 |
+
|
| 50 |
+
## Mixture of LoRA Experts (X-LoRA)
|
| 51 |
+
|
| 52 |
+
[X-LoRA](https://huggingface.co/papers/2402.07148) is a mixture of experts method for LoRA which works by using dense or sparse gating to dynamically activate LoRA experts. The LoRA experts as well as the base model are frozen during training, resulting in a low parameter count as only the gating layers must be trained. In particular, the gating layers output scalings which (depending on config) are granular on the layer and token level. Additionally, during inference, X-LoRA dynamically activates LoRA adapters to recall knowledge and effectively mix them:
|
| 53 |
+
|
| 54 |
+
The below graphic demonstrates how the scalings change for different prompts for each token. This highlights the activation of different adapters as the generation progresses and the sequence creates new context.
|
| 55 |
+
|
| 56 |
+

|
| 57 |
+
|
| 58 |
+
For each step, X-LoRA requires the base model to be run twice: first, to get the hidden states without any LoRA adapters; second, these hidden states are used to calculate the scalings that are applied to the LoRA adapters, and the model is run again. The output of the second run is the result of the model step.
|
| 59 |
+
|
| 60 |
+
Ultimately, X-LoRA allows the model to reflect upon its knowledge because of the dual forward pass scheme, and dynamically reconfigure the architecture.
|
| 61 |
+
|
| 62 |
+
## Low-Rank Hadamard Product (LoHa)
|
| 63 |
+
|
| 64 |
+
Low-rank decomposition can impact performance because the weight updates are limited to the low-rank space, which can constrain a model's expressiveness. However, you don't necessarily want to use a larger rank because it increases the number of trainable parameters. To address this, [LoHa](https://huggingface.co/papers/2108.06098) (a method originally developed for computer vision) was applied to diffusion models where the ability to generate diverse images is an important consideration. LoHa should also work with general model types, but the embedding layers aren't currently implemented in PEFT.
|
| 65 |
+
|
| 66 |
+
LoHa uses the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (element-wise product) instead of the matrix product. ∆W is represented by four smaller matrices instead of two - like in LoRA - and each pair of these low-rank matrices are combined with the Hadamard product. As a result, ∆W can have the same number of trainable parameters but a higher rank and expressivity.
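In other words, where LoRA writes the update as a single low-rank product \\(\Delta W = B A\\), LoHa roughly writes it as the element-wise product of two such products, \\(\Delta W = (B_1 A_1) \odot (B_2 A_2)\\), which is how it can reach a higher effective rank for a comparable parameter budget.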
|
| 67 |
+
|
| 68 |
+
## Low-Rank Kronecker Product (LoKr)
|
| 69 |
+
|
| 70 |
+
[LoKr](https://hf.co/papers/2309.14859) is very similar to LoRA and LoHa, and it is also mainly applied to diffusion models, though you could also use it with other model types. LoKr replaces the matrix product with the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) instead. The Kronecker product decomposition creates a block matrix which preserves the rank of the original weight matrix. Another benefit of the Kronecker product is that it can be vectorized by stacking the matrix columns. This can speed up the process because you're avoiding fully reconstructing ∆W.
|
| 71 |
+
|
| 72 |
+
## Orthogonal Finetuning (OFT)
|
| 73 |
+
|
| 74 |
+
<div class="flex justify-center">
|
| 75 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/oft.png"/>
|
| 76 |
+
</div>
|
| 77 |
+
<small><a href="https://hf.co/papers/2306.07280">Controlling Text-to-Image Diffusion by Orthogonal Finetuning</a></small>
|
| 78 |
+
|
| 79 |
+
[OFT](https://hf.co/papers/2306.07280) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means OFT is more capable at preserving the subject and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)).
|
| 80 |
+
|
| 81 |
+
OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure.
|
| 82 |
+
|
| 83 |
+
## Orthogonal Butterfly (BOFT)
|
| 84 |
+
|
| 85 |
+
[BOFT](https://hf.co/papers/2311.06243) is an improved orthogonal finetuning method that focuses on preserving a pretrained model's generative capabilities while being significantly more parameter-efficient than standard OFT. Like OFT, BOFT maintains the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer by applying an orthogonal transformation to the pretrained weight matrix, ensuring the semantic relationships among neurons are preserved.
|
| 86 |
+
|
| 87 |
+
Instead of using a block-diagonal orthogonal matrix, BOFT factorizes the orthogonal transformation into a product of **sparse butterfly matrices** (originally introduced in the [Cooley–Tukey FFT](https://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm)). Unlike OFT's block-diagonal rotations, which only mix inputs within each block, the butterfly structure guarantees that every input can influence every output, producing a **dense connectivity** with just `O(d log d)` parameters. This factorization preserves expressivity while drastically reducing the parameter count compared to OFT (at the expense of computation time).
|
| 88 |
+
|
| 89 |
+
In practice, BOFT multiplies each pretrained weight matrix by a sequence of butterfly-structured orthogonal factors, enabling efficient and expressive neuron rotations. This makes BOFT well-suited for controllable generation and tasks where maintaining the pretrained model's subject representation is critical, while also scaling to larger models with lower memory and compute overhead.
|
| 90 |
+
|
| 91 |
+
## Adaptive Low-Rank Adaptation (AdaLoRA)
|
| 92 |
+
|
| 93 |
+
[AdaLoRA](https://hf.co/papers/2303.10512) manages the parameter budget introduced from LoRA by allocating more parameters - in other words, a higher rank `r` - for important weight matrices that are better adapted for a task and pruning less important ones. The rank is controlled by a method similar to singular value decomposition (SVD). The ∆W is parameterized with two orthogonal matrices and a diagonal matrix which contains singular values. This parametrization method avoids iteratively applying SVD which is computationally expensive. Based on this method, the rank of ∆W is adjusted according to an importance score. ∆W is divided into triplets and each triplet is scored according to its contribution to model performance. Triplets with low importance scores are pruned and triplets with high importance scores are kept for finetuning.
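Schematically, the update is parameterized as \\(\Delta W = P \Lambda Q\\), where \\(P\\) and \\(Q\\) are the orthogonal matrices and \\(\Lambda\\) is the diagonal matrix containing the singular values; each triplet consists of a column of \\(P\\), a singular value, and a row of \\(Q\\).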
|
| 94 |
+
|
| 95 |
+
Training with AdaLoRA has three phases: the init phase, the budgeting phase, and the final phase. In the init phase, no budgeting is applied, so the ranks are not touched. During the budgeting phase, the process described above is applied and the rank is redistributed according to a budget, aiming to give more important adapters more rank and less important ones less. In the final phase, budgeting has ended and the ranks have been redistributed, but training may continue for a while with the redistributed ranks to further improve performance.
|
| 96 |
+
|
| 97 |
+
## Llama-Adapter
|
| 98 |
+
|
| 99 |
+
[Llama-Adapter](https://hf.co/papers/2303.16199) is a method for adapting Llama into an instruction-following model. To help adapt the model for instruction-following, the adapter is trained with a 52K instruction-output dataset.
|
| 100 |
+
|
| 101 |
+
A set of learnable adaption prompts are prefixed to the input instruction tokens. These are inserted into the upper layers of the model because it is better to learn with the higher-level semantics of the pretrained model. The instruction-output tokens prefixed to the input guide the adaption prompt to generate a contextual response.
|
| 102 |
+
|
| 103 |
+
<div class="flex justify-center">
|
| 104 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/llama-adapter.png"/>
|
| 105 |
+
</div>
|
| 106 |
+
<small><a href="https://hf.co/papers/2303.16199">LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention</a></small>
|
| 107 |
+
|
| 108 |
+
To avoid adding noise to the tokens, the adapter uses zero-initialized attention. On top of this, the adapter adds a learnable gating factor (initialized with zeros) to progressively add information to the model during training. This prevents overwhelming the model's pretrained knowledge with the newly learned instructions.
|
| 109 |
+
|
| 110 |
+
## Householder Reflection Adaptation (HRA)
|
| 111 |
+
|
| 112 |
+
[HRA](https://huggingface.co/papers/2405.17484) provides a new perspective connecting LoRA to OFT, which means it can harness the advantages of both strategies, reduce parameters and computation costs while penalizing the loss of pre-training knowledge.
|
| 113 |
+
|
| 114 |
+
<div class="flex justify-center">
|
| 115 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/hra.png"/>
|
| 116 |
+
</div>
|
| 117 |
+
<small><a href="https://huggingface.co/papers/2405.17484">Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation</a></small>
|
| 118 |
+
|
| 119 |
+
HRA constructs a chain of `r` trainable Householder reflections (HRs). Because the Householder reflection matrix is an orthogonal matrix and the product of orthogonal matrices is also an orthogonal matrix, HRA satisfies the theoretical guarantee of Orthogonal Finetuning (OFT). Meanwhile, HRA can also be viewed as a low-rank fine-tuning adapter by rewriting the formula.
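Schematically, each reflection has the form \\(H_i = I - 2 \, u_i u_i^{\top} / (u_i^{\top} u_i)\\) for a trainable vector \\(u_i\\), and the adapted weight is the frozen weight multiplied by the chain \\(H_1 H_2 \cdots H_r\\).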
|
| 120 |
+
|
| 121 |
+
The higher `r`, the more trainable parameters, resulting in a larger model capacity and better performance. Besides, due to the chain structure, the orthogonality of HR planes impacts the capacity and regularity of HRA. To achieve a trade-off between the model capacity and regularity, an orthogonality regularizer of the HR planes is added to the loss function. The weight \\(\lambda\\) can control the strength of the regularizer.
|
| 122 |
+
|
| 123 |
+
## Bone
|
| 124 |
+
Bone has been superseded by [MiSS](https://huggingface.co/papers/2409.15371), described in the updated version of the paper (*MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing*).
|
| 125 |
+
If you already have a Bone checkpoint, you can use `/scripts/convert-bone-to-miss.py` to convert it into a MiSS checkpoint and proceed with training using MiSS.
|
| 126 |
+
|
| 127 |
+
## MiSS
|
| 128 |
+
[MiSS](https://huggingface.co/papers/2409.15371) (Matrix Shard Sharing) is a novel Parameter-Efficient Fine-Tuning (PEFT) method designed to address the trade-off between adaptability and efficiency in Large Language Models. The core approach of MiSS is a simple shard-sharing mechanism. It achieves low-rank adaptation by decomposing a weight matrix into multiple fragments and then utilizing a shared, trainable "common fragment." The final low-rank update matrix is constructed by replicating these shared, partitioned shards. MiSS adopts a low-rank structure, requires only a single trainable matrix, and introduces a new update mechanism distinct from LoRA, achieving an excellent balance between performance and efficiency.
|
| 129 |
+
|
| 130 |
+
<small><a href="https://huggingface.co/papers/2409.15371">MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing</a></small>
|
| 131 |
+
|
| 132 |
+
Intuitively, the single trainable matrix in MiSS has the same shape as `lora_B`, so for the same `r`, MiSS has `in_features * r` fewer trainable parameters than LoRA.
|
| 133 |
+
|
| 134 |
+
Note: Bat's r (b) is special and requires that weight W satisfies the conditions `in_features % r == 0` and `out_features % r == 0`. Additionally, when `in_features == out_features` and MiSS-r equals LoRA-r, MiSS's number of trainable parameters is only half that of LoRA.
|
| 135 |
+
|
| 136 |
+
Although the nonlinear updates of Bat bring some performance improvements, they also increase computational overhead. Its main purpose is to provide researchers with a direction for improvement. Therefore, we recommend fine-tuning the comprehensive MiSS model instead.
|
peft/docs/source/conceptual_guides/ia3.md
ADDED
|
@@ -0,0 +1,68 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# IA3
|
| 18 |
+
|
| 19 |
+
This conceptual guide gives a brief overview of [IA3](https://huggingface.co/papers/2205.05638), a parameter-efficient fine tuning technique that is
|
| 20 |
+
intended to improve over [LoRA](./lora).
|
| 21 |
+
|
| 22 |
+
To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations)
|
| 23 |
+
rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules
|
| 24 |
+
in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original
|
| 25 |
+
weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA)
|
| 26 |
+
keeps the number of trainable parameters much smaller.
|
| 27 |
+
|
| 28 |
+
Being similar to LoRA, IA3 carries many of the same advantages:
|
| 29 |
+
|
| 30 |
+
* IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%)
|
| 31 |
+
* The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
|
| 32 |
+
* Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
|
| 33 |
+
* IA3 does not add any inference latency because adapter weights can be merged with the base model.
|
| 34 |
+
|
| 35 |
+
In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable
|
| 36 |
+
parameters. Following the authors' implementation, IA3 weights are added to the key, value and feedforward layers
|
| 37 |
+
of a Transformer model. To be specific, for transformer models, IA3 weights are added to the outputs of key and value layers, and to the input of the second feedforward layer
|
| 38 |
+
in each transformer block.
|
| 39 |
+
|
| 40 |
+
Given the target layers for injecting IA3 parameters, the number of trainable parameters
|
| 41 |
+
can be determined based on the size of the weight matrices.
|
| 42 |
+
|
| 43 |
+
|
| 44 |
+
## Common IA3 parameters in PEFT
|
| 45 |
+
|
| 46 |
+
As with other methods supported by PEFT, to fine-tune a model using IA3, you need to:
|
| 47 |
+
|
| 48 |
+
1. Instantiate a base model.
|
| 49 |
+
2. Create a configuration (`IA3Config`) where you define IA3-specific parameters.
|
| 50 |
+
3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
|
| 51 |
+
4. Train the `PeftModel` as you normally would train the base model.
|
| 52 |
+
|
| 53 |
+
`IA3Config` allows you to control how IA3 is applied to the base model through the following parameters:
|
| 54 |
+
|
| 55 |
+
- `target_modules`: The modules (for example, attention blocks) to apply the IA3 vectors.
|
| 56 |
+
- `feedforward_modules`: The list of modules to be treated as feedforward layers in `target_modules`. While learned vectors are multiplied with
|
| 57 |
+
the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers. Note that `feedforward_modules` must be a subset of `target_modules`.
|
| 58 |
+
- `modules_to_save`: List of modules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
|
| 59 |
+
|
| 60 |
+
## Example Usage
|
| 61 |
+
|
| 62 |
+
For the task of sequence classification, one can initialize the IA3 config for a Llama model as follows:
|
| 63 |
+
|
| 64 |
+
```py
|
| 65 |
+
peft_config = IA3Config(
|
| 66 |
+
task_type=TaskType.SEQ_CLS, target_modules=["k_proj", "v_proj", "down_proj"], feedforward_modules=["down_proj"]
|
| 67 |
+
)
|
| 68 |
+
```
|
peft/docs/source/conceptual_guides/oft.md
ADDED
|
@@ -0,0 +1,165 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Orthogonal Finetuning (OFT and BOFT)
|
| 18 |
+
|
| 19 |
+
This conceptual guide gives a brief overview of [OFT](https://huggingface.co/papers/2306.07280), [OFTv2](https://www.arxiv.org/abs/2506.19847) and [BOFT](https://huggingface.co/papers/2311.06243), parameter-efficient fine-tuning techniques that use an orthogonal matrix to multiplicatively transform the pretrained weight matrices.
|
| 20 |
+
|
| 21 |
+
To achieve efficient fine-tuning, OFT represents the weight updates with an orthogonal transformation. The orthogonal transformation is parameterized by an orthogonal matrix that is multiplied with the pretrained weight matrix. These new matrices can be trained to adapt to the new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn't receive any further adjustments. To produce the final results, the original and the adapted weights are multiplied together.
|
| 22 |
+
|
| 23 |
+
Orthogonal Butterfly (BOFT) generalizes OFT with Butterfly factorization and further improves its parameter efficiency and finetuning flexibility. In short, OFT can be viewed as a special case of BOFT. Different from LoRA that uses additive low-rank weight updates, BOFT uses multiplicative orthogonal weight updates. The comparison is shown below.
|
| 24 |
+
|
| 25 |
+
<div class="flex justify-center">
|
| 26 |
+
<img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/BOFT_comparison.png"/>
|
| 27 |
+
</div>
|
| 28 |
+
|
| 29 |
+
|
| 30 |
+
BOFT has some advantages compared to LoRA:
|
| 31 |
+
|
| 32 |
+
* BOFT proposes a simple yet generic way to finetune pretrained models to downstream tasks, yielding a better preservation of pretraining knowledge and a better parameter efficiency.
|
| 33 |
+
* Through the orthogonality, BOFT introduces a structural constraint, i.e., keeping the [hyperspherical energy](https://huggingface.co/papers/1805.09298) unchanged during finetuning. This can effectively reduce the forgetting of pretraining knowledge.
|
| 34 |
+
* BOFT uses the butterfly factorization to efficiently parameterize the orthogonal matrix, which yields a compact yet expressive learning space (i.e., hypothesis class).
|
| 35 |
+
* The sparse matrix decomposition in BOFT brings in additional inductive biases that are beneficial to generalization.
|
| 36 |
+
|
| 37 |
+
In principle, BOFT can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Given the target layers for injecting BOFT parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
|
| 38 |
+
|
| 39 |
+
## Merge OFT/BOFT weights into the base model
|
| 40 |
+
|
| 41 |
+
Similar to LoRA, the weights learned by OFT/BOFT can be integrated into the pretrained weight matrices using the `merge_and_unload()` function. This function merges the adapter weights with the base model, which allows you to effectively use the newly merged model as a standalone model.
|
| 42 |
+
|
| 43 |
+
<div class="flex justify-center">
|
| 44 |
+
<img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/boft_merge.png"/>
|
| 45 |
+
</div>
|
| 46 |
+
|
| 47 |
+
This works because during training, the orthogonal weight matrix (R in the diagram above) and the pretrained weight matrices are separate. But once training is complete, these weights can actually be merged (multiplied) into a new weight matrix that is equivalent.
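As a minimal sketch of this workflow (the model name and adapter path are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("base-model-name")  # placeholder
peft_model = PeftModel.from_pretrained(base_model, "path/to/oft-or-boft-adapter")  # placeholder

# multiply the learned orthogonal matrices into the frozen weights and drop the adapter layers
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("merged-model")
```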
|
| 48 |
+
|
| 49 |
+
## Utils for OFT / BOFT
|
| 50 |
+
|
| 51 |
+
### Common OFT / BOFT parameters in PEFT
|
| 52 |
+
|
| 53 |
+
As with other methods supported by PEFT, to fine-tune a model using OFT or BOFT, you need to:
|
| 54 |
+
|
| 55 |
+
1. Instantiate a base model.
|
| 56 |
+
2. Create a configuration (`OFTConfig` or `BOFTConfig`) where you define OFT/BOFT-specific parameters.
|
| 57 |
+
3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
|
| 58 |
+
4. Train the `PeftModel` as you normally would train the base model.
|
| 59 |
+
|
| 60 |
+
|
| 61 |
+
### OFT-specific parameters
|
| 62 |
+
|
| 63 |
+
`OFTConfig` allows you to control how OFT is applied to the base model through the following parameters:
|
| 64 |
+
|
| 65 |
+
- `r`: OFT rank, the number of OFT blocks per injected layer. **Bigger** `r` results in sparser update matrices with **fewer** trainable parameters. **Note**: You can only specify either `r` or `oft_block_size`, but not both simultaneously, because `r` × `oft_block_size` = layer dimension. For simplicity, we let the user specify either `r` or `oft_block_size` and infer the other one. The default is `r = 0`; the user is advised to set `oft_block_size` instead for better clarity.
|
| 66 |
+
- `oft_block_size`: OFT block size across different layers. **Bigger** `oft_block_size` results in denser update matrices with **more** trainable parameters. **Note**: Please choose `oft_block_size` so that it divides the layer's input dimension (`in_features`), e.g., 4, 8, 16. You can only specify either `r` or `oft_block_size`, but not both simultaneously, because `r` × `oft_block_size` = layer dimension. For simplicity, we let the user specify either `r` or `oft_block_size` and infer the other one. The default is `oft_block_size = 32`.
|
| 67 |
+
- `use_cayley_neumann`: Specifies whether to use the Cayley-Neumann parameterization (efficient but approximate) or the vanilla Cayley parameterization (exact but computationally expensive because of the matrix inverse). We recommend setting it to `True` for better efficiency, but performance may be slightly worse because of the approximation error. Please test both settings (`True` and `False`) depending on your needs. Default is `False`.
|
| 68 |
+
- `module_dropout`: The multiplicative dropout probability, by setting OFT blocks to identity during training, similar to the dropout layer in LoRA.
|
| 69 |
+
- `bias`: specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"oft_only"`.
|
| 70 |
+
- `target_modules`: The modules (for example, attention blocks) to inject the OFT matrices.
|
| 71 |
+
- `modules_to_save`: List of modules apart from OFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
|
| 72 |
+
|
| 73 |
+
### BOFT-specific parameters
|
| 74 |
+
|
| 75 |
+
`BOFTConfig` allows you to control how BOFT is applied to the base model through the following parameters:
|
| 76 |
+
|
| 77 |
+
- `boft_block_size`: the BOFT matrix block size across different layers, expressed in `int`. **Bigger** `boft_block_size` results in denser update matrices with **more** trainable parameters. **Note**: please choose `boft_block_size` so that it divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Also, please only
|
| 78 |
+
specify either `boft_block_size` or `boft_block_num`, but not both simultaneously, and don't leave both at 0, because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
|
| 79 |
+
- `boft_block_num`: the number of BOFT matrix blocks across different layers, expressed in `int`. **Bigger** `boft_block_num` results in sparser update matrices with **fewer** trainable parameters. **Note**: please choose `boft_block_num` so that it divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Also, please only
|
| 80 |
+
specify either `boft_block_size` or `boft_block_num`, but not both simultaneously, and don't leave both at 0, because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
|
| 81 |
+
- `boft_n_butterfly_factor`: the number of butterfly factors. **Note**: for `boft_n_butterfly_factor=1`, BOFT is the same as vanilla OFT; for `boft_n_butterfly_factor=2`, the effective block size of OFT becomes twice as big and the number of blocks becomes half.
|
| 82 |
+
- `bias`: specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"boft_only"`.
|
| 83 |
+
- `boft_dropout`: specify the probability of multiplicative dropout.
|
| 84 |
+
- `target_modules`: The modules (for example, attention blocks) to inject the OFT/BOFT matrices.
|
| 85 |
+
- `modules_to_save`: List of modules apart from OFT/BOFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
|
| 86 |
+
|
| 87 |
+
|
| 88 |
+
|
| 89 |
+
## OFT Example Usage
|
| 90 |
+
|
| 91 |
+
To use OFT for quantized finetuning with [TRL](https://github.com/huggingface/trl) for `SFT`, `PPO`, or `DPO` fine-tuning, follow this outline:
|
| 92 |
+
|
| 93 |
+
```py
|
| 94 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
|
| 95 |
+
from trl import SFTTrainer
|
| 96 |
+
from peft import OFTConfig
|
| 97 |
+
|
| 98 |
+
if use_quantization:
|
| 99 |
+
bnb_config = BitsAndBytesConfig(
|
| 100 |
+
load_in_4bit=True,
|
| 101 |
+
bnb_4bit_quant_type="nf4",
|
| 102 |
+
bnb_4bit_compute_dtype=torch.bfloat16,
|
| 103 |
+
bnb_4bit_use_double_quant=True,
|
| 104 |
+
bnb_4bit_quant_storage=torch.bfloat16,
|
| 105 |
+
)
|
| 106 |
+
|
| 107 |
+
model = AutoModelForCausalLM.from_pretrained(
|
| 108 |
+
"model_name",
|
| 109 |
+
quantization_config=bnb_config
|
| 110 |
+
)
|
| 111 |
+
tokenizer = AutoTokenizer.from_pretrained("model_name")
|
| 112 |
+
|
| 113 |
+
# Configure OFT
|
| 114 |
+
peft_config = OFTConfig(
|
| 115 |
+
oft_block_size=32,
|
| 116 |
+
use_cayley_neumann=True,
|
| 117 |
+
target_modules="all-linear",
|
| 118 |
+
bias="none",
|
| 119 |
+
task_type="CAUSAL_LM"
|
| 120 |
+
)
|
| 121 |
+
|
| 122 |
+
trainer = SFTTrainer(
|
| 123 |
+
model=model,
|
| 124 |
+
train_dataset=ds['train'],
|
| 125 |
+
peft_config=peft_config,
|
| 126 |
+
processing_class=tokenizer,
|
| 127 |
+
args=training_arguments,
|
| 128 |
+
data_collator=collator,
|
| 129 |
+
)
|
| 130 |
+
|
| 131 |
+
trainer.train()
|
| 132 |
+
```
|
| 133 |
+
|
| 134 |
+
|
| 135 |
+
## BOFT Example Usage
|
| 136 |
+
|
| 137 |
+
For examples of applying the BOFT method to various downstream tasks, take a look at the following step-by-step guides on how to finetune a model with BOFT:
|
| 138 |
+
|
| 139 |
+
|
| 140 |
+
- [Dreambooth finetuning with BOFT](https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md)
|
| 141 |
+
- [Controllable generation finetuning with BOFT (ControlNet)](https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md)
|
| 142 |
+
|
| 143 |
+
For the task of image classification, one can initialize the BOFT config for a DinoV2 model as follows:
|
| 144 |
+
|
| 145 |
+
```py
|
| 146 |
+
import transformers
|
| 147 |
+
|
| 148 |
+
from peft import BOFTConfig, get_peft_model
|
| 149 |
+
|
| 150 |
+
config = BOFTConfig(
|
| 151 |
+
boft_block_size=4,
|
| 152 |
+
boft_n_butterfly_factor=2,
|
| 153 |
+
target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
|
| 154 |
+
boft_dropout=0.1,
|
| 155 |
+
bias="boft_only",
|
| 156 |
+
modules_to_save=["classifier"],
|
| 157 |
+
)
|
| 158 |
+
|
| 159 |
+
model = transformers.Dinov2ForImageClassification.from_pretrained(
|
| 160 |
+
"facebook/dinov2-large",
|
| 161 |
+
num_labels=100,
|
| 162 |
+
)
|
| 163 |
+
|
| 164 |
+
boft_model = get_peft_model(model, config)
|
| 165 |
+
```
|
peft/docs/source/conceptual_guides/prompting.md
ADDED
|
@@ -0,0 +1,93 @@
|
| 1 |
+
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 2 |
+
rendered properly in your Markdown viewer.
|
| 3 |
+
-->
|
| 4 |
+
|
| 5 |
+
# Soft prompts
|
| 6 |
+
|
| 7 |
+
Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as *prompting*. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully training a separate model for each downstream task, and use the same frozen pretrained model instead. This is a lot easier because you can use the same model for several different tasks, and it is significantly more efficient to train and store a smaller set of prompt parameters than to train all the model's parameters.
|
| 8 |
+
|
| 9 |
+
There are two categories of prompting methods:
|
| 10 |
+
|
| 11 |
+
- hard prompts are manually handcrafted text prompts with discrete input tokens; the downside is that it requires a lot of effort to create a good prompt
|
| 12 |
+
- soft prompts are learnable tensors concatenated with the input embeddings that can be optimized to a dataset; the downside is that they aren't human readable because you aren't matching these "virtual tokens" to the embeddings of a real word
|
| 13 |
+
|
| 14 |
+
This conceptual guide provides a brief overview of the soft prompt methods included in 🤗 PEFT: prompt tuning, prefix tuning, P-tuning, and multitask prompt tuning.
|
| 15 |
+
|
| 16 |
+
## Prompt tuning
|
| 17 |
+
|
| 18 |
+
<div class="flex justify-center">
|
| 19 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prompt-tuning.png"/>
|
| 20 |
+
</div>
|
| 21 |
+
<small>Only train and store a significantly smaller set of task-specific prompt parameters <a href="https://hf.co/papers/2104.08691">(image source)</a>.</small>
|
| 22 |
+
|
| 23 |
+
[Prompt tuning](https://hf.co/papers/2104.08691) was developed for text classification tasks on T5 models, and all downstream tasks are cast as a text generation task. For example, sequence classification usually assigns a single class label to a sequence of text. By casting it as a text generation task, the tokens that make up the class label are *generated*. Prompts are added to the input as a series of tokens. Typically, the model parameters are fixed which means the prompt tokens are also fixed by the model parameters.
|
| 24 |
+
|
| 25 |
+
The key idea behind prompt tuning is that prompt tokens have their own parameters that are updated independently. This means you can keep the pretrained model's parameters frozen, and only update the gradients of the prompt token embeddings. The results are comparable to the traditional method of training the entire model, and prompt tuning performance scales as model size increases.
|
| 26 |
+
|
| 27 |
+
Take a look at [Prompt tuning for causal language modeling](../task_guides/clm-prompt-tuning) for a step-by-step guide on how to train a model with prompt tuning.
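As a minimal sketch of what prompt tuning looks like in PEFT (the model name and initialization text are just examples):

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"  # example model
model = AutoModelForCausalLM.from_pretrained(model_name)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify if the tweet is a complaint or not:",  # example init text
    num_virtual_tokens=8,
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the virtual token embeddings are trainable
```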
|
| 28 |
+
|
| 29 |
+
## Prefix tuning
|
| 30 |
+
|
| 31 |
+
<div class="flex justify-center">
|
| 32 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prefix-tuning.png"/>
|
| 33 |
+
</div>
|
| 34 |
+
<small>Optimize the prefix parameters for each task <a href="https://hf.co/papers/2101.00190">(image source)</a>.</small>
|
| 35 |
+
|
| 36 |
+
[Prefix tuning](https://hf.co/papers/2101.00190) was designed for natural language generation (NLG) tasks on GPT models. It is very similar to prompt tuning; prefix tuning also prepends a sequence of task-specific vectors to the input that can be trained and updated while keeping the rest of the pretrained model's parameters frozen.
|
| 37 |
+
|
| 38 |
+
The main difference is that the prefix parameters are inserted in **all** of the model layers, whereas prompt tuning only adds the prompt parameters to the model input embeddings. The prefix parameters are also optimized by a separate feed-forward network (FFN) instead of training directly on the soft prompts because it causes instability and hurts performance. The FFN is discarded after updating the soft prompts.
|
| 39 |
+
|
| 40 |
+
As a result, the authors found that prefix tuning demonstrates comparable performance to fully finetuning a model, despite having 1000x fewer parameters, and it performs even better in low-data settings.
|
| 41 |
+
|
| 42 |
+
Take a look at [Prefix tuning for conditional generation](../task_guides/seq2seq-prefix-tuning) for a step-by-step guide on how to train a model with prefix tuning.
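A minimal configuration sketch (the model name is just an example):

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")  # example model

# prefix parameters are inserted into all model layers
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```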
|
| 43 |
+
|
| 44 |
+
## P-tuning
|
| 45 |
+
|
| 46 |
+
<div class="flex justify-center">
|
| 47 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/p-tuning.png"/>
|
| 48 |
+
</div>
|
| 49 |
+
<small>Prompt tokens can be inserted anywhere in the input sequence, and they are optimized by a prompt encoder <a href="https://hf.co/papers/2103.10385">(image source)</a>.</small>
|
| 50 |
+
|
| 51 |
+
[P-tuning](https://hf.co/papers/2103.10385) is designed for natural language understanding (NLU) tasks and all language models.
|
| 52 |
+
It is another variation of a soft prompt method; P-tuning also adds a trainable embedding tensor that can be optimized to find better prompts, and it uses a prompt encoder (a bidirectional long-short term memory network or LSTM) to optimize the prompt parameters. Unlike prefix tuning though:
|
| 53 |
+
|
| 54 |
+
- the prompt tokens can be inserted anywhere in the input sequence, and it isn't restricted to only the beginning
|
| 55 |
+
- the prompt tokens are only added to the input instead of adding them to every layer of the model
|
| 56 |
+
- introducing *anchor* tokens can improve performance because they indicate characteristics of a component in the input sequence
|
| 57 |
+
|
| 58 |
+
The results suggest that P-tuning is more efficient than manually crafting prompts, and it enables GPT-like models to compete with BERT-like models on NLU tasks.
|
| 59 |
+
|
| 60 |
+
Take a look at [P-tuning for sequence classification](../task_guides/ptuning-seq-classification) for a step-by-step guide on how to train a model with P-tuning.
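A minimal configuration sketch (the model and hyperparameters are just examples):

```python
from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)  # example model

peft_config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    encoder_hidden_size=128,  # hidden size of the LSTM-based prompt encoder
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```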
|
| 61 |
+
|
| 62 |
+
## Multitask prompt tuning
|
| 63 |
+
|
| 64 |
+
<div class="flex justify-center">
|
| 65 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt.png"/>
|
| 66 |
+
</div>
|
| 67 |
+
<small><a href="https://hf.co/papers/2303.02861">Multitask prompt tuning enables parameter-efficient transfer learning</a>.</small>
|
| 68 |
+
|
| 69 |
+
[Multitask prompt tuning (MPT)](https://hf.co/papers/2303.02861) learns a single prompt from data for multiple task types that can be shared for different target tasks. Other existing approaches learn a separate soft prompt for each task that need to be retrieved or aggregated for adaptation to target tasks. MPT consists of two stages:
|
| 70 |
+
|
| 71 |
+
1. source training - for each task, its soft prompt is decomposed into task-specific vectors. The task-specific vectors are multiplied together to form another matrix W, and the Hadamard product is used between W and a shared prompt matrix P to generate a task-specific prompt matrix. The task-specific prompts are distilled into a single prompt matrix that is shared across all tasks. This prompt is trained with multitask training.
|
| 72 |
+
2. target adaptation - to adapt the single prompt for a target task, a target prompt is initialized and expressed as the Hadamard product of the shared prompt matrix and the task-specific low-rank prompt matrix.
|
| 73 |
+
|
| 74 |
+
<div class="flex justify-center">
|
| 75 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt-decomposition.png"/>
|
| 76 |
+
</div>
|
| 77 |
+
<small><a href="https://hf.co/papers/2103.10385">Prompt decomposition</a>.</small>
|
| 78 |
+
|
| 79 |
+
|
| 80 |
+
## Context-Aware Prompt Tuning (CPT)
|
| 81 |
+
|
| 82 |
+
<div class="flex justify-center">
|
| 83 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/cpt.png"/>
|
| 84 |
+
</div>
|
| 85 |
+
<small>CPT optimizing only specific token embeddings while keeping the rest of the model frozen <a href="https://huggingface.co/papers/2410.17222">(image source)</a>.</small>
|
| 86 |
+
|
| 87 |
+
[Context-Aware Prompt Tuning (CPT)](https://huggingface.co/papers/2410.17222) is designed to enhance few-shot classification by refining only context embeddings.
|
| 88 |
+
This approach combines ideas from In-Context Learning (ICL), Prompt Tuning (PT), and adversarial optimization, focusing on making model adaptation both parameter-efficient and effective.
|
| 89 |
+
In CPT, only specific context token embeddings are optimized, while the rest of the model remains frozen.
|
| 90 |
+
To prevent overfitting and maintain stability, CPT uses controlled perturbations to limit the allowed changes to context embeddings within a defined range.
|
| 91 |
+
Additionally, to address the phenomenon of recency bias—where examples near the end of the context tend to be prioritized over earlier ones—CPT applies a decay loss factor.
|
| 92 |
+
|
| 93 |
+
Take a look at [Example](https://github.com/huggingface/peft/blob/main/examples/cpt_finetuning/README.md) for a step-by-step guide on how to train a model with CPT.
|
peft/docs/source/developer_guides/checkpoint.md
ADDED
|
@@ -0,0 +1,244 @@
|
| 1 |
+
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# PEFT checkpoint format
|
| 18 |
+
|
| 19 |
+
This document describes how PEFT's checkpoint files are structured and how to convert between the PEFT format and other formats.
|
| 20 |
+
|
| 21 |
+
## PEFT files
|
| 22 |
+
|
| 23 |
+
PEFT (parameter-efficient fine-tuning) methods only update a small subset of a model's parameters rather than all of them. This is nice because checkpoint files can generally be much smaller than the original model files and are easier to store and share. However, this also means that to load a PEFT model, you need to have the original model available as well.
|
| 24 |
+
|
| 25 |
+
When you call [`~PeftModel.save_pretrained`] on a PEFT model, three files are saved, as described below:
|
| 26 |
+
|
| 27 |
+
1. `adapter_model.safetensors` or `adapter_model.bin`
|
| 28 |
+
|
| 29 |
+
By default, the model is saved in the `safetensors` format, a secure alternative to the `bin` format, which is known to be susceptible to [security vulnerabilities](https://huggingface.co/docs/hub/security-pickle) because it uses the pickle utility under the hood. Both formats store the same `state_dict` though, and are interchangeable.
|
| 30 |
+
|
| 31 |
+
The `state_dict` only contains the parameters of the adapter module, not the base model. To illustrate the difference in size, a normal BERT model requires ~420MB of disk space, whereas an IA³ adapter on top of this BERT model only requires ~260KB (see the sketch after this list for how to check this yourself).
|
| 32 |
+
|
| 33 |
+
2. `adapter_config.json`
|
| 34 |
+
|
| 35 |
+
The `adapter_config.json` file contains the configuration of the adapter module, which is necessary to load the model. Below is an example of an `adapter_config.json` for an IA³ adapter with standard settings applied to a BERT model:
|
| 36 |
+
|
| 37 |
+
```json
|
| 38 |
+
{
|
| 39 |
+
"auto_mapping": {
|
| 40 |
+
"base_model_class": "BertModel",
|
| 41 |
+
"parent_library": "transformers.models.bert.modeling_bert"
|
| 42 |
+
},
|
| 43 |
+
"base_model_name_or_path": "bert-base-uncased",
|
| 44 |
+
"fan_in_fan_out": false,
|
| 45 |
+
"feedforward_modules": [
|
| 46 |
+
"output.dense"
|
| 47 |
+
],
|
| 48 |
+
"inference_mode": true,
|
| 49 |
+
"init_ia3_weights": true,
|
| 50 |
+
"modules_to_save": null,
|
| 51 |
+
"peft_type": "IA3",
|
| 52 |
+
"revision": null,
|
| 53 |
+
"target_modules": [
|
| 54 |
+
"key",
|
| 55 |
+
"value",
|
| 56 |
+
"output.dense"
|
| 57 |
+
],
|
| 58 |
+
"task_type": null
|
| 59 |
+
}
|
| 60 |
+
```
|
| 61 |
+
|
| 62 |
+
The configuration file contains:
|
| 63 |
+
|
| 64 |
+
- the adapter module type stored, `"peft_type": "IA3"`
|
| 65 |
+
- information about the base model like `"base_model_name_or_path": "bert-base-uncased"`
|
| 66 |
+
- the revision of the model (if any), `"revision": null`
|
| 67 |
+
|
| 68 |
+
If the base model is not a pretrained Transformers model, the latter two entries will be `null`. Other than that, the settings are all related to the specific IA³ adapter that was used to fine-tune the model.
|
| 69 |
+
|
| 70 |
+
3. `README.md`
|
| 71 |
+
|
| 72 |
+
The generated `README.md` is the model card of a PEFT model and contains a few pre-filled entries. The intent of this is to make it easier to share the model with others and to provide some basic information about the model. This file is not needed to load the model.
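To see these three files being produced, and to check the size mentioned above for an IA³ adapter, here is a minimal sketch using a BERT base model with default IA³ settings; the output directory name is arbitrary and the exact file size will vary:

```python
import os
from transformers import AutoModel
from peft import IA3Config, get_peft_model

base_model = AutoModel.from_pretrained("bert-base-uncased")
# with no explicit target_modules, PEFT resolves the default IA³ targets for BERT
peft_model = get_peft_model(base_model, IA3Config())
peft_model.save_pretrained("ia3-bert")

print(sorted(os.listdir("ia3-bert")))
# e.g. ['README.md', 'adapter_config.json', 'adapter_model.safetensors']
size_kb = os.path.getsize("ia3-bert/adapter_model.safetensors") / 1024
print(f"adapter size: ~{size_kb:.0f} KB")
```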
|
| 73 |
+
|
| 74 |
+
## Convert to PEFT format
|
| 75 |
+
|
| 76 |
+
When converting from another format to the PEFT format, we require both the `adapter_model.safetensors` (or `adapter_model.bin`) file and the `adapter_config.json` file.
|
| 77 |
+
|
| 78 |
+
### adapter_model
|
| 79 |
+
|
| 80 |
+
For the model weights, it is important to use the correct mapping from parameter name to value for PEFT to load the file. Getting this mapping right is an exercise in checking the implementation details, as there is no generally agreed upon format for PEFT adapters.
|
| 81 |
+
|
| 82 |
+
Fortunately, figuring out this mapping is not overly complicated for common base cases. Let's look at a concrete example, the [`LoraLayer`](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py):
|
| 83 |
+
|
| 84 |
+
```python
|
| 85 |
+
# showing only part of the code
|
| 86 |
+
|
| 87 |
+
class LoraLayer(BaseTunerLayer):
|
| 88 |
+
# All names of layers that may contain (trainable) adapter weights
|
| 89 |
+
adapter_layer_names = ("lora_A", "lora_B", "lora_embedding_A", "lora_embedding_B")
|
| 90 |
+
# All names of other parameters that may contain adapter-related parameters
|
| 91 |
+
other_param_names = ("r", "lora_alpha", "scaling", "lora_dropout")
|
| 92 |
+
|
| 93 |
+
def __init__(self, base_layer: nn.Module, **kwargs) -> None:
|
| 94 |
+
self.base_layer = base_layer
|
| 95 |
+
self.r = {}
|
| 96 |
+
self.lora_alpha = {}
|
| 97 |
+
self.scaling = {}
|
| 98 |
+
self.lora_dropout = nn.ModuleDict({})
|
| 99 |
+
self.lora_A = nn.ModuleDict({})
|
| 100 |
+
self.lora_B = nn.ModuleDict({})
|
| 101 |
+
# For Embedding layer
|
| 102 |
+
self.lora_embedding_A = nn.ParameterDict({})
|
| 103 |
+
self.lora_embedding_B = nn.ParameterDict({})
|
| 104 |
+
# Mark the weight as unmerged
|
| 105 |
+
self._disable_adapters = False
|
| 106 |
+
self.merged_adapters = []
|
| 107 |
+
self.use_dora: dict[str, bool] = {}
|
| 108 |
+
self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None # for DoRA
|
| 109 |
+
self._caches: dict[str, Any] = {}
|
| 110 |
+
self.kwargs = kwargs
|
| 111 |
+
```
|
| 112 |
+
|
| 113 |
+
In the `__init__` code used by all `LoraLayer` classes in PEFT, there are a bunch of parameters used to initialize the model, but only a few are relevant for the checkpoint file: `lora_A`, `lora_B`, `lora_embedding_A`, and `lora_embedding_B`. These parameters are listed in the class attribute `adapter_layer_names` and contain the learnable parameters, so they must be included in the checkpoint file. All the other parameters, like the rank `r`, are derived from the `adapter_config.json` and must be included there (unless the default value is used).
|
| 114 |
+
|
| 115 |
+
Let's check the `state_dict` of a PEFT LoRA model applied to BERT. When printing the first five keys using the default LoRA settings (the remaining keys are the same, just with different layer numbers), we get:
|
| 116 |
+
|
| 117 |
+
- `base_model.model.encoder.layer.0.attention.self.query.lora_A.weight`
|
| 118 |
+
- `base_model.model.encoder.layer.0.attention.self.query.lora_B.weight`
|
| 119 |
+
- `base_model.model.encoder.layer.0.attention.self.value.lora_A.weight`
|
| 120 |
+
- `base_model.model.encoder.layer.0.attention.self.value.lora_B.weight`
|
| 121 |
+
- `base_model.model.encoder.layer.1.attention.self.query.lora_A.weight`
|
| 122 |
+
- etc.
|
| 123 |
+
|
| 124 |
+
Let's break this down:
|
| 125 |
+
|
| 126 |
+
- By default, for BERT models, LoRA is applied to the `query` and `value` layers of the attention module. This is why you see `attention.self.query` and `attention.self.value` in the key names for each layer.
|
| 127 |
+
- LoRA decomposes the weights into two low-rank matrices, `lora_A` and `lora_B`. This is where `lora_A` and `lora_B` come from in the key names.
|
| 128 |
+
- These LoRA matrices are implemented as `nn.Linear` layers, so the parameters are stored in the `.weight` attribute (`lora_A.weight`, `lora_B.weight`).
|
| 129 |
+
- By default, LoRA isn't applied to BERT's embedding layer, so there are _no entries_ for `lora_A_embedding` and `lora_B_embedding`.
|
| 130 |
+
- The keys of the `state_dict` always start with `"base_model.model."`. The reason is that, in PEFT, we wrap the base model inside a tuner-specific model (`LoraModel` in this case), which itself is wrapped in a general PEFT model (`PeftModel`). For this reason, these two prefixes are added to the keys. When converting to the PEFT format, it is required to add these prefixes.
|
| 131 |
+
|
| 132 |
+
> [!TIP]
|
| 133 |
+
> This last point is not true for prefix tuning techniques like prompt tuning. There, the extra embeddings are directly stored in the `state_dict` without any prefixes added to the keys.
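To reproduce such a key listing yourself, a minimal sketch (assuming `bert-base-uncased` and default LoRA settings) could look like this:

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model, get_peft_model_state_dict

base_model = AutoModel.from_pretrained("bert-base-uncased")
peft_model = get_peft_model(base_model, LoraConfig())  # default LoRA settings

# get_peft_model_state_dict returns the keys as they end up in the checkpoint,
# i.e. with the adapter name already stripped
checkpoint_state_dict = get_peft_model_state_dict(peft_model)
print(list(checkpoint_state_dict.keys())[:5])
```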
|
| 134 |
+
|
| 135 |
+
When inspecting the parameter names in the loaded model, you might be surprised to find that they look a bit different, e.g. `base_model.model.encoder.layer.0.attention.self.query.lora_A.default.weight`. The difference is the *`.default`* part in the second to last segment. This part exists because PEFT generally allows the addition of multiple adapters at once (using an `nn.ModuleDict` or `nn.ParameterDict` to store them). For example, if you add another adapter called "other", the key for that adapter would be `base_model.model.encoder.layer.0.attention.self.query.lora_A.other.weight`.
|
| 136 |
+
|
| 137 |
+
When you call [`~PeftModel.save_pretrained`], the adapter name is stripped from the keys. The reason is that the adapter name is not an important part of the model architecture; it is just an arbitrary name. When loading the adapter, you could choose a totally different name, and the model would still work the same way. This is why the adapter name is not stored in the checkpoint file.
|
| 138 |
+
|
| 139 |
+
> [!TIP]
|
| 140 |
+
> If you call `save_pretrained("some/path")` and the adapter name is not `"default"`, the adapter is stored in a sub-directory with the same name as the adapter. So if the name is "other", it would be stored inside of `some/path/other`.
|
| 141 |
+
|
| 142 |
+
In some circumstances, deciding which values to add to the checkpoint file can become a bit more complicated. For example, in PEFT, DoRA is implemented as a special case of LoRA. If you want to convert a DoRA model to PEFT, you should create a LoRA checkpoint with extra entries for DoRA. You can see this in the `__init__` of the previous `LoraLayer` code:
|
| 143 |
+
|
| 144 |
+
```python
|
| 145 |
+
self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None # for DoRA
|
| 146 |
+
```
|
| 147 |
+
|
| 148 |
+
This indicates that there is an optional extra parameter per layer for DoRA.
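Putting this together, here is a rough sketch of how a conversion script might remap keys from a hypothetical external LoRA checkpoint into the key format PEFT expects; the input file name and its key layout are assumptions, not a real format:

```python
import os
from safetensors.torch import load_file, save_file

# hypothetical external checkpoint with keys like
# "encoder.layer.0.attention.self.query.lora_A.weight"
external_state_dict = load_file("external_lora.safetensors")

peft_state_dict = {}
for key, value in external_state_dict.items():
    # add the prefixes PEFT expects; the adapter name is *not* part of the saved keys
    peft_state_dict[f"base_model.model.{key}"] = value

os.makedirs("converted-adapter", exist_ok=True)
save_file(peft_state_dict, "converted-adapter/adapter_model.safetensors")
```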
|
| 149 |
+
|
| 150 |
+
### adapter_config
|
| 151 |
+
|
| 152 |
+
All the other information needed to load a PEFT model is contained in the `adapter_config.json` file. Let's check this file for a LoRA model applied to BERT:
|
| 153 |
+
|
| 154 |
+
```json
|
| 155 |
+
{
|
| 156 |
+
"alpha_pattern": {},
|
| 157 |
+
"auto_mapping": {
|
| 158 |
+
"base_model_class": "BertModel",
|
| 159 |
+
"parent_library": "transformers.models.bert.modeling_bert"
|
| 160 |
+
},
|
| 161 |
+
"base_model_name_or_path": "bert-base-uncased",
|
| 162 |
+
"bias": "none",
|
| 163 |
+
"fan_in_fan_out": false,
|
| 164 |
+
"inference_mode": true,
|
| 165 |
+
"init_lora_weights": true,
|
| 166 |
+
"layer_replication": null,
|
| 167 |
+
"layers_pattern": null,
|
| 168 |
+
"layers_to_transform": null,
|
| 169 |
+
"loftq_config": {},
|
| 170 |
+
"lora_alpha": 8,
|
| 171 |
+
"lora_dropout": 0.0,
|
| 172 |
+
"megatron_config": null,
|
| 173 |
+
"megatron_core": "megatron.core",
|
| 174 |
+
"modules_to_save": null,
|
| 175 |
+
"peft_type": "LORA",
|
| 176 |
+
"r": 8,
|
| 177 |
+
"rank_pattern": {},
|
| 178 |
+
"revision": null,
|
| 179 |
+
"target_modules": [
|
| 180 |
+
"query",
|
| 181 |
+
"value"
|
| 182 |
+
],
|
| 183 |
+
"task_type": null,
|
| 184 |
+
"use_dora": false,
|
| 185 |
+
"use_rslora": false
|
| 186 |
+
}
|
| 187 |
+
```
|
| 188 |
+
|
| 189 |
+
This contains a lot of entries, and at first glance, it could feel overwhelming to figure out all the right values to put in there. However, most of the entries are not necessary to load the model. This is either because they use the default values and don't need to be added or because they only affect the initialization of the LoRA weights, which is irrelevant when it comes to loading the model. If you find that you don't know what a specific parameter does, e.g. `"use_rslora"`, don't add it, and you should be fine. Also note that as more options are added, this file will get more entries in the future, but it should be backward compatible.
|
| 190 |
+
|
| 191 |
+
At the minimum, you should include the following entries:
|
| 192 |
+
|
| 193 |
+
```json
|
| 194 |
+
{
|
| 195 |
+
"target_modules": ["query", "value"],
|
| 196 |
+
"peft_type": "LORA"
|
| 197 |
+
}
|
| 198 |
+
```
|
| 199 |
+
|
| 200 |
+
However, adding as many entries as possible, like the rank `r` or the `base_model_name_or_path` (if it's a Transformers model) is recommended. This information can help others understand the model better and share it more easily. To check which keys and values are expected, check out the [config.py](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/config.py) file (as an example, this is the config file for LoRA) in the PEFT source code.
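If you are writing the conversion in Python anyway, one way to avoid hand-crafting the JSON is to instantiate the corresponding config class and let PEFT serialize it; a minimal sketch:

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["query", "value"],
    base_model_name_or_path="bert-base-uncased",
)
# writes adapter_config.json (with all defaults filled in) to the given directory
config.save_pretrained("converted-adapter")
```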
|
| 201 |
+
|
| 202 |
+
## Model storage
|
| 203 |
+
|
| 204 |
+
In some circumstances, you might want to store the whole PEFT model, including the base weights. This can be necessary if, for instance, the base model is not available to the users trying to load the PEFT model. You can either merge the adapter weights into the base weights first or convert the PEFT model into a Transformers model.
|
| 205 |
+
|
| 206 |
+
### Merge the weights
|
| 207 |
+
|
| 208 |
+
The most straightforward way to store the whole PEFT model is to merge the adapter weights into the base weights:
|
| 209 |
+
|
| 210 |
+
```python
|
| 211 |
+
merged_model = model.merge_and_unload()
|
| 212 |
+
merged_model.save_pretrained(...)
|
| 213 |
+
```
|
| 214 |
+
|
| 215 |
+
There are some disadvantages to this approach, though:
|
| 216 |
+
|
| 217 |
+
- Once [`~LoraModel.merge_and_unload`] is called, you get a basic model without any PEFT-specific functionality. This means you can't use any of the PEFT-specific methods anymore.
|
| 218 |
+
- You cannot unmerge the weights, load multiple adapters at once, disable the adapter, etc.
|
| 219 |
+
- Not all PEFT methods support merging weights.
|
| 220 |
+
- Some PEFT methods may generally allow merging, but not with specific settings (e.g. when using certain quantization techniques).
|
| 221 |
+
- The whole model will be much larger than the PEFT model, as it will contain all the base weights as well.
|
| 222 |
+
|
| 223 |
+
But inference with a merged model should be a bit faster.
|
| 224 |
+
|
| 225 |
+
### Convert to a Transformers model
|
| 226 |
+
|
| 227 |
+
Another way to save the whole model, assuming the base model is a Transformers model, is the hacky approach of directly inserting the PEFT weights into the base model and saving it. This only works if you "trick" Transformers into believing the PEFT model is not a PEFT model, and it only works with LoRA because other adapters are not implemented in Transformers.
|
| 228 |
+
|
| 229 |
+
```python
|
| 230 |
+
model = ... # the PEFT model
|
| 231 |
+
...
|
| 232 |
+
# after you finish training the model, save it in a temporary location
|
| 233 |
+
model.save_pretrained(<temp_location>)
|
| 234 |
+
# now load this model directly into a transformers model, without the PEFT wrapper
|
| 235 |
+
# the PEFT weights are directly injected into the base model
|
| 236 |
+
model_loaded = AutoModel.from_pretrained(<temp_location>)
|
| 237 |
+
# now make the loaded model believe that it is _not_ a PEFT model
|
| 238 |
+
model_loaded._hf_peft_config_loaded = False
|
| 239 |
+
# now when we save it, it will save the whole model
|
| 240 |
+
model_loaded.save_pretrained(<final_location>)
|
| 241 |
+
# or upload to Hugging Face Hub
|
| 242 |
+
model_loaded.push_to_hub(<final_location>)
|
| 243 |
+
```
|
| 244 |
+
|
peft/docs/source/developer_guides/contributing.md
ADDED
|
@@ -0,0 +1,96 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Contribute to PEFT
|
| 18 |
+
|
| 19 |
+
We are happy to accept contributions to PEFT. If you plan to contribute, please read this to make the process as smooth as possible.
|
| 20 |
+
|
| 21 |
+
## Installation
|
| 22 |
+
|
| 23 |
+
For code contributions to PEFT, you should choose the ["source"](../install#source) installation method.
|
| 24 |
+
|
| 25 |
+
If you are new to creating a pull request, follow the [Creating a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) guide by GitHub.
|
| 26 |
+
|
| 27 |
+
## Tests and code quality checks
|
| 28 |
+
|
| 29 |
+
Regardless of the contribution type (unless it’s only about the docs), you should run tests and code quality checks before creating a PR to ensure your contribution doesn’t break anything and follows the project standards.
|
| 30 |
+
|
| 31 |
+
We provide a Makefile to execute the necessary tests. Run the code below for the unit test:
|
| 32 |
+
|
| 33 |
+
```sh
|
| 34 |
+
make test
|
| 35 |
+
```
|
| 36 |
+
|
| 37 |
+
Run one of the following to either only check or check and fix code quality and style:
|
| 38 |
+
|
| 39 |
+
```sh
|
| 40 |
+
make quality # just check
|
| 41 |
+
make style # check and fix
|
| 42 |
+
```
|
| 43 |
+
|
| 44 |
+
You can also set up [`pre-commit`](https://pre-commit.com/) to run these fixes
|
| 45 |
+
automatically as Git commit hooks.
|
| 46 |
+
|
| 47 |
+
```bash
|
| 48 |
+
$ pip install pre-commit
|
| 49 |
+
$ pre-commit install
|
| 50 |
+
```
|
| 51 |
+
|
| 52 |
+
Running all the tests can take a while, so during development it can be more efficient to only [run tests specific to your change](https://docs.pytest.org/en/6.2.x/usage.html#specifying-tests-selecting-tests), e.g. via:
|
| 53 |
+
|
| 54 |
+
```sh
|
| 55 |
+
pytest tests/<test-file-name> -k <name-of-test>
|
| 56 |
+
```
|
| 57 |
+
|
| 58 |
+
This should finish much quicker and allow for faster iteration.
|
| 59 |
+
|
| 60 |
+
If your change is specific to a hardware setting (e.g., it requires CUDA), take a look at [tests/test_gpu_examples.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_gpu_examples.py) and [tests/test_common_gpu.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_common_gpu.py) to see if it makes sense to add tests there. If your change could have an effect on saving and loading models, please run the tests with the `--regression` flag to trigger regression tests.
|
| 61 |
+
|
| 62 |
+
It can happen that while you’re working on your PR, the underlying code base changes due to other changes being merged. If that happens – especially when there is a merge conflict – please update your branch with the latest changes. This can be a merge or a rebase, and we'll squash and merge the PR once it’s ready. If possible, avoid force pushes to make reviews easier.
|
| 63 |
+
|
| 64 |
+
## PR description
|
| 65 |
+
|
| 66 |
+
When opening a PR, please provide a nice description of the change you're proposing. If it relates to other issues or PRs, please reference them. Providing a good description not only helps the reviewers review your code better and faster, it can also be used later (as a basis) for the commit message which helps with long term maintenance of the project.
|
| 67 |
+
|
| 68 |
+
If your code makes some non-trivial changes, it may also be a good idea to add comments to the code to explain those changes. For example, if you had to iterate on your implementation multiple times because the most obvious way didn’t work, it’s a good indication that a code comment is needed.
|
| 69 |
+
|
| 70 |
+
## Bugfixes
|
| 71 |
+
|
| 72 |
+
Please give a description of the circumstances that led to the bug. If there is an existing issue, please link to it (e.g., “Resolves #12345”).
|
| 73 |
+
|
| 74 |
+
Ideally when a bugfix is provided, it should be accompanied by a test for the bug. The test should fail with the current code and pass with the bugfix. Add a comment to the test that references the issue or PR. Without a test, it is more difficult to prevent regressions in the future.
|
| 75 |
+
|
| 76 |
+
## Add a new fine-tuning method
|
| 77 |
+
|
| 78 |
+
New parameter-efficient fine-tuning methods are developed all the time. If you would like to add a new and promising method to PEFT, please follow these steps.
|
| 79 |
+
|
| 80 |
+
1. Before you start to implement the new method, please open a [GitHub issue](https://github.com/huggingface/peft/issues) with your proposal. This way, the maintainers can give you some early feedback.
|
| 81 |
+
2. Please add a link to the source (usually a paper) of the method. The paper should be in a final state to avoid changing requirements during development (e.g. due to reviewer feedback).
|
| 82 |
+
3. When implementing the method, it makes sense to look for existing implementations that already exist as a guide. Moreover, when you structure your code, please take inspiration from the other PEFT methods. For example, if your method is similar to LoRA, it makes sense to structure your code similarly or even reuse some functions or classes where it makes sense (some code duplication is okay, but don’t overdo it).
|
| 83 |
+
4. Ideally, in addition to the implementation of the new method, there should also be
|
| 84 |
+
- [examples](https://github.com/huggingface/peft/tree/main/examples) (notebooks, scripts)
|
| 85 |
+
- [documentation](https://github.com/huggingface/peft/tree/main/docs/source)
|
| 86 |
+
- [extensive test suite](https://github.com/huggingface/peft/tree/main/tests) that proves the method correctly integrates with PEFT
|
| 87 |
+
- [experimental setup](https://github.com/huggingface/peft/tree/main/method_comparison#creating-new-experiments) to run benchmarks
|
| 88 |
+
5. Once you have something that seems to be working, don’t hesitate to create a draft PR even if it’s not in a mergeable state yet. The maintainers are happy to give you feedback and guidance along the way.
|
| 89 |
+
|
| 90 |
+
## Add other features
|
| 91 |
+
|
| 92 |
+
It is best if you first open an issue on GitHub with a proposal to add the new feature. This way, you can discuss with the maintainers if it makes sense to add the feature before spending too much time on implementing it.
|
| 93 |
+
|
| 94 |
+
New features should generally be accompanied by tests and documentation or examples. Without the latter, users will have a hard time discovering your cool new feature.
|
| 95 |
+
|
| 96 |
+
Changes to the code should be implemented in a backward-compatible way. For example, existing code should continue to work the same way after the feature is merged.
|
peft/docs/source/developer_guides/custom_models.md
ADDED
|
@@ -0,0 +1,304 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Custom models
|
| 18 |
+
|
| 19 |
+
Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is
|
| 20 |
+
assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like
|
| 21 |
+
[LoRA](../conceptual_guides/lora) - are not restricted to specific model types.
|
| 22 |
+
|
| 23 |
+
In this guide, we will see how LoRA can be applied to a multilayer perceptron, a computer vision model from the [timm](https://huggingface.co/docs/timm/index) library, or a new 🤗 Transformers architecture.
|
| 24 |
+
|
| 25 |
+
## Multilayer perceptron
|
| 26 |
+
|
| 27 |
+
Let's assume that we want to fine-tune a multilayer perceptron with LoRA. Here is the definition:
|
| 28 |
+
|
| 29 |
+
```python
|
| 30 |
+
from torch import nn
|
| 31 |
+
|
| 32 |
+
|
| 33 |
+
class MLP(nn.Module):
|
| 34 |
+
def __init__(self, num_units_hidden=2000):
|
| 35 |
+
super().__init__()
|
| 36 |
+
self.seq = nn.Sequential(
|
| 37 |
+
nn.Linear(20, num_units_hidden),
|
| 38 |
+
nn.ReLU(),
|
| 39 |
+
nn.Linear(num_units_hidden, num_units_hidden),
|
| 40 |
+
nn.ReLU(),
|
| 41 |
+
nn.Linear(num_units_hidden, 2),
|
| 42 |
+
nn.LogSoftmax(dim=-1),
|
| 43 |
+
)
|
| 44 |
+
|
| 45 |
+
def forward(self, X):
|
| 46 |
+
return self.seq(X)
|
| 47 |
+
```
|
| 48 |
+
|
| 49 |
+
This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an output layer.
|
| 50 |
+
|
| 51 |
+
> [!TIP]
|
| 52 |
+
> For this toy example, we choose an exceedingly large number of hidden units to highlight the efficiency gains
|
| 53 |
+
> from PEFT, but those gains are in line with more realistic examples.
|
| 54 |
+
|
| 55 |
+
There are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers
|
| 56 |
+
models, PEFT will know which layers to apply LoRA to, but in this case, it is up to us as a user to choose the layers.
|
| 57 |
+
To determine the names of the layers to tune:
|
| 58 |
+
|
| 59 |
+
```python
|
| 60 |
+
print([(n, type(m)) for n, m in MLP().named_modules()])
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
This should print:
|
| 64 |
+
|
| 65 |
+
```
|
| 66 |
+
[('', __main__.MLP),
|
| 67 |
+
('seq', torch.nn.modules.container.Sequential),
|
| 68 |
+
('seq.0', torch.nn.modules.linear.Linear),
|
| 69 |
+
('seq.1', torch.nn.modules.activation.ReLU),
|
| 70 |
+
('seq.2', torch.nn.modules.linear.Linear),
|
| 71 |
+
('seq.3', torch.nn.modules.activation.ReLU),
|
| 72 |
+
('seq.4', torch.nn.modules.linear.Linear),
|
| 73 |
+
('seq.5', torch.nn.modules.activation.LogSoftmax)]
|
| 74 |
+
```
|
| 75 |
+
|
| 76 |
+
Let's say we want to apply LoRA to the input layer and to the hidden layer, which are `'seq.0'` and `'seq.2'`. Moreover,
|
| 77 |
+
let's assume we want to update the output layer without LoRA, which would be `'seq.4'`. The corresponding config would
|
| 78 |
+
be:
|
| 79 |
+
|
| 80 |
+
```python
|
| 81 |
+
from peft import LoraConfig
|
| 82 |
+
|
| 83 |
+
config = LoraConfig(
|
| 84 |
+
target_modules=["seq.0", "seq.2"],
|
| 85 |
+
modules_to_save=["seq.4"],
|
| 86 |
+
)
|
| 87 |
+
```
|
| 88 |
+
|
| 89 |
+
With that, we can create our PEFT model and check the fraction of parameters trained:
|
| 90 |
+
|
| 91 |
+
```python
|
| 92 |
+
from peft import get_peft_model
|
| 93 |
+
|
| 94 |
+
model = MLP()
|
| 95 |
+
peft_model = get_peft_model(model, config)
|
| 96 |
+
peft_model.print_trainable_parameters()
|
| 97 |
+
# prints trainable params: 56,164 || all params: 4,100,164 || trainable%: 1.369798866581922
|
| 98 |
+
```
|
| 99 |
+
|
| 100 |
+
Finally, we can use any training framework we like, or write our own fit loop, to train the `peft_model`.
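For instance, a minimal fit loop with dummy data could look like this; the data, number of epochs, and learning rate are placeholders:

```python
import torch
from torch import nn

# dummy binary-classification data matching the MLP's 20 input features
X = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))

# only optimize the trainable (adapter and modules_to_save) parameters
optimizer = torch.optim.Adam(
    (p for p in peft_model.parameters() if p.requires_grad), lr=2e-3
)
criterion = nn.NLLLoss()  # the MLP ends in LogSoftmax

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(peft_model(X), y)
    loss.backward()
    optimizer.step()
```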
|
| 101 |
+
|
| 102 |
+
For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/multilayer_perceptron/multilayer_perceptron_lora.ipynb).
|
| 103 |
+
|
| 104 |
+
## timm models
|
| 105 |
+
|
| 106 |
+
The [timm](https://huggingface.co/docs/timm/index) library contains a large number of pretrained computer vision models.
|
| 107 |
+
Those can also be fine-tuned with PEFT. Let's check out how this works in practice.
|
| 108 |
+
|
| 109 |
+
To start, ensure that timm is installed in the Python environment:
|
| 110 |
+
|
| 111 |
+
```bash
|
| 112 |
+
python -m pip install -U timm
|
| 113 |
+
```
|
| 114 |
+
|
| 115 |
+
Next we load a timm model for an image classification task:
|
| 116 |
+
|
| 117 |
+
```python
|
| 118 |
+
import timm
|
| 119 |
+
|
| 120 |
+
num_classes = ...
|
| 121 |
+
model_id = "timm/poolformer_m36.sail_in1k"
|
| 122 |
+
model = timm.create_model(model_id, pretrained=True, num_classes=num_classes)
|
| 123 |
+
```
|
| 124 |
+
|
| 125 |
+
Again, we need to make a decision about what layers to apply LoRA to. Since LoRA supports 2D conv layers, and since
|
| 126 |
+
those are a major building block of this model, we should apply LoRA to the 2D conv layers. To identify the names of
|
| 127 |
+
those layers, let's look at all the layer names:
|
| 128 |
+
|
| 129 |
+
```python
|
| 130 |
+
print([(n, type(m)) for n, m in model.named_modules()])
|
| 131 |
+
```
|
| 132 |
+
|
| 133 |
+
This will print a very long list; we'll only show the first few entries:
|
| 134 |
+
|
| 135 |
+
```
|
| 136 |
+
[('', timm.models.metaformer.MetaFormer),
|
| 137 |
+
('stem', timm.models.metaformer.Stem),
|
| 138 |
+
('stem.conv', torch.nn.modules.conv.Conv2d),
|
| 139 |
+
('stem.norm', torch.nn.modules.linear.Identity),
|
| 140 |
+
('stages', torch.nn.modules.container.Sequential),
|
| 141 |
+
('stages.0', timm.models.metaformer.MetaFormerStage),
|
| 142 |
+
('stages.0.downsample', torch.nn.modules.linear.Identity),
|
| 143 |
+
('stages.0.blocks', torch.nn.modules.container.Sequential),
|
| 144 |
+
('stages.0.blocks.0', timm.models.metaformer.MetaFormerBlock),
|
| 145 |
+
('stages.0.blocks.0.norm1', timm.layers.norm.GroupNorm1),
|
| 146 |
+
('stages.0.blocks.0.token_mixer', timm.models.metaformer.Pooling),
|
| 147 |
+
('stages.0.blocks.0.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
|
| 148 |
+
('stages.0.blocks.0.drop_path1', torch.nn.modules.linear.Identity),
|
| 149 |
+
('stages.0.blocks.0.layer_scale1', timm.models.metaformer.Scale),
|
| 150 |
+
('stages.0.blocks.0.res_scale1', torch.nn.modules.linear.Identity),
|
| 151 |
+
('stages.0.blocks.0.norm2', timm.layers.norm.GroupNorm1),
|
| 152 |
+
('stages.0.blocks.0.mlp', timm.layers.mlp.Mlp),
|
| 153 |
+
('stages.0.blocks.0.mlp.fc1', torch.nn.modules.conv.Conv2d),
|
| 154 |
+
('stages.0.blocks.0.mlp.act', torch.nn.modules.activation.GELU),
|
| 155 |
+
('stages.0.blocks.0.mlp.drop1', torch.nn.modules.dropout.Dropout),
|
| 156 |
+
('stages.0.blocks.0.mlp.norm', torch.nn.modules.linear.Identity),
|
| 157 |
+
('stages.0.blocks.0.mlp.fc2', torch.nn.modules.conv.Conv2d),
|
| 158 |
+
('stages.0.blocks.0.mlp.drop2', torch.nn.modules.dropout.Dropout),
|
| 159 |
+
('stages.0.blocks.0.drop_path2', torch.nn.modules.linear.Identity),
|
| 160 |
+
('stages.0.blocks.0.layer_scale2', timm.models.metaformer.Scale),
|
| 161 |
+
('stages.0.blocks.0.res_scale2', torch.nn.modules.linear.Identity),
|
| 162 |
+
('stages.0.blocks.1', timm.models.metaformer.MetaFormerBlock),
|
| 163 |
+
('stages.0.blocks.1.norm1', timm.layers.norm.GroupNorm1),
|
| 164 |
+
('stages.0.blocks.1.token_mixer', timm.models.metaformer.Pooling),
|
| 165 |
+
('stages.0.blocks.1.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
|
| 166 |
+
...
|
| 167 |
+
('head.global_pool.flatten', torch.nn.modules.linear.Identity),
|
| 168 |
+
('head.norm', timm.layers.norm.LayerNorm2d),
|
| 169 |
+
('head.flatten', torch.nn.modules.flatten.Flatten),
|
| 170 |
+
('head.drop', torch.nn.modules.linear.Identity),
|
| 171 |
+
('head.fc', torch.nn.modules.linear.Linear)]
|
| 172 |
+
|
| 173 |
+
```
|
| 174 |
+
|
| 175 |
+
Upon closer inspection, we see that the 2D conv layers have names such as `"stages.0.blocks.0.mlp.fc1"` and
|
| 176 |
+
`"stages.0.blocks.0.mlp.fc2"`. How can we match those layer names specifically? You can write a [regular
|
| 177 |
+
expression](https://docs.python.org/3/library/re.html) to match the layer names. For our case, the regex
|
| 178 |
+
`r".*\.mlp\.fc\d"` should do the job.
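Before relying on the regex, it is worth verifying that it matches exactly the layers you intend (PEFT matches a string `target_modules` as a regular expression against the full module name); a quick check:

```python
import re

pattern = r".*\.mlp\.fc\d"
matched = [name for name, _ in model.named_modules() if re.fullmatch(pattern, name)]
print(matched[:4])
# e.g. ['stages.0.blocks.0.mlp.fc1', 'stages.0.blocks.0.mlp.fc2', ...]
```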
|
| 179 |
+
|
| 180 |
+
Furthermore, as in the first example, we should ensure that the output layer, in this case the classification head, is
|
| 181 |
+
also updated. Looking at the end of the list printed above, we can see that it's named `'head.fc'`. With that in mind,
|
| 182 |
+
here is our LoRA config:
|
| 183 |
+
|
| 184 |
+
```python
|
| 185 |
+
config = LoraConfig(target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"])
|
| 186 |
+
```
|
| 187 |
+
|
| 188 |
+
Then we only need to create the PEFT model by passing our base model and the config to `get_peft_model`:
|
| 189 |
+
|
| 190 |
+
```python
|
| 191 |
+
peft_model = get_peft_model(model, config)
|
| 192 |
+
peft_model.print_trainable_parameters()
|
| 193 |
+
# prints trainable params: 1,064,454 || all params: 56,467,974 || trainable%: 1.88505789139876
|
| 194 |
+
```
|
| 195 |
+
|
| 196 |
+
This shows us that we only need to train less than 2% of all parameters, which is a huge efficiency gain.
|
| 197 |
+
|
| 198 |
+
For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/image_classification/image_classification_timm_peft_lora.ipynb).
|
| 199 |
+
|
| 200 |
+
## New transformers architectures
|
| 201 |
+
|
| 202 |
+
When new popular transformers architectures are released, we do our best to quickly add them to PEFT. If you come across a transformers model that is not supported out of the box, don't worry, it will most likely still work if the config is set correctly. Specifically, you have to identify the layers that should be adapted and set them correctly when initializing the corresponding config class, e.g. `LoraConfig`. Here are some tips to help with this.
|
| 203 |
+
|
| 204 |
+
As a first step, it is a good idea to check the existing models for inspiration. You can find them inside of [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) in the PEFT repository. Often, you'll find a similar architecture that uses the same names. For example, if the new model architecture is a variation of the "mistral" model and you want to apply LoRA, you can see that the entry for "mistral" in `TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING` contains `["q_proj", "v_proj"]`. This tells you that for "mistral" models, the `target_modules` for LoRA should be `["q_proj", "v_proj"]`:
|
| 205 |
+
|
| 206 |
+
```python
|
| 207 |
+
from peft import LoraConfig, get_peft_model
|
| 208 |
+
|
| 209 |
+
my_mistral_model = ...
|
| 210 |
+
config = LoraConfig(
|
| 211 |
+
target_modules=["q_proj", "v_proj"],
|
| 212 |
+
..., # other LoRA arguments
|
| 213 |
+
)
|
| 214 |
+
peft_model = get_peft_model(my_mistral_model, config)
|
| 215 |
+
```
|
| 216 |
+
|
| 217 |
+
If that doesn't help, check the existing modules in your model architecture with the `named_modules` method and try to identify the attention layers, especially the key, query, and value layers. Those will often have names such as `c_attn`, `query`, `q_proj`, etc. The key layer is not always adapted, and ideally, you should check whether including it results in better performance.
|
| 218 |
+
|
| 219 |
+
Additionally, linear layers are common targets to be adapted (e.g. in the [QLoRA paper](https://huggingface.co/papers/2305.14314), the authors suggest adapting them as well). Their names will often contain the strings `fc` or `dense`.
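A quick, hedged way to surface candidate layers in an unfamiliar architecture is to filter `named_modules` for linear layers with attention- or feedforward-like names; `my_new_model` and the name fragments below are just illustrative:

```python
import torch

candidates = ("q_proj", "k_proj", "v_proj", "query", "key", "value", "c_attn", "fc", "dense")
for name, module in my_new_model.named_modules():
    if isinstance(module, torch.nn.Linear) and any(fragment in name for fragment in candidates):
        print(name, type(module))
```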
|
| 220 |
+
|
| 221 |
+
If you want to add a new model to PEFT, please create an entry in [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) and open a pull request on the [repository](https://github.com/huggingface/peft/pulls). Don't forget to update the [README](https://github.com/huggingface/peft#models-support-matrix) as well.
|
| 222 |
+
|
| 223 |
+
## Verify parameters and layers
|
| 224 |
+
|
| 225 |
+
You can verify whether you've correctly applied a PEFT method to your model in a few ways.
|
| 226 |
+
|
| 227 |
+
* Check the fraction of parameters that are trainable with the [`~PeftModel.print_trainable_parameters`] method. If this number is lower or higher than expected, check the model `repr` by printing the model. This shows the names of all the layer types in the model. Ensure that only the intended target layers are replaced by the adapter layers. For example, if LoRA is applied to `nn.Linear` layers, then you should only see `lora.Linear` layers being used.
|
| 228 |
+
|
| 229 |
+
```py
|
| 230 |
+
peft_model.print_trainable_parameters()
|
| 231 |
+
```
|
| 232 |
+
|
| 233 |
+
* Another way you can view the adapted layers is to use the `targeted_module_names` attribute to list the name of each module that was adapted.
|
| 234 |
+
|
| 235 |
+
```python
|
| 236 |
+
print(peft_model.targeted_module_names)
|
| 237 |
+
```
|
| 238 |
+
|
| 239 |
+
## Unsupported module types
|
| 240 |
+
|
| 241 |
+
Methods like LoRA only work if the target modules are supported by PEFT. For example, it's possible to apply LoRA to `nn.Linear` and `nn.Conv2d` layers, but not, for instance, to `nn.LSTM`. If you find a layer class you want to apply PEFT to is not supported, you can:
|
| 242 |
+
|
| 243 |
+
- define a custom mapping to dynamically dispatch custom modules in LoRA
|
| 244 |
+
- open an [issue](https://github.com/huggingface/peft/issues) and request the feature where maintainers will implement it or guide you on how to implement it yourself if demand for this module type is sufficiently high
|
| 245 |
+
|
| 246 |
+
### Experimental support for dynamic dispatch of custom modules in LoRA
|
| 247 |
+
|
| 248 |
+
> [!WARNING]
|
| 249 |
+
> This feature is experimental and subject to change, depending on its reception by the community. We will introduce a public and stable API if there is significant demand for it.
|
| 250 |
+
|
| 251 |
+
PEFT supports an experimental API for custom module types for LoRA. Let's assume you have a LoRA implementation for LSTMs. Normally, you would not be able to tell PEFT to use it, even if it would theoretically work with PEFT. However, this is possible with dynamic dispatch of custom layers.
|
| 252 |
+
|
| 253 |
+
The experimental API currently looks like this:
|
| 254 |
+
|
| 255 |
+
```python
|
| 256 |
+
class MyLoraLSTMLayer:
|
| 257 |
+
...
|
| 258 |
+
|
| 259 |
+
base_model = ... # load the base model that uses LSTMs
|
| 260 |
+
|
| 261 |
+
# add the LSTM layer names to target_modules
|
| 262 |
+
config = LoraConfig(..., target_modules=["lstm"])
|
| 263 |
+
# define a mapping from base layer type to LoRA layer type
|
| 264 |
+
custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
|
| 265 |
+
# register the new mapping
|
| 266 |
+
config._register_custom_module(custom_module_mapping)
|
| 267 |
+
# after registration, create the PEFT model
|
| 268 |
+
peft_model = get_peft_model(base_model, config)
|
| 269 |
+
# do training
|
| 270 |
+
```
|
| 271 |
+
|
| 272 |
+
> [!TIP]
|
| 273 |
+
> When you call [`get_peft_model`], you will see a warning because PEFT does not recognize the targeted module type. In this case, you can ignore this warning.
|
| 274 |
+
|
| 275 |
+
By supplying a custom mapping, PEFT first checks the base model's layers against the custom mapping and dispatches to the custom LoRA layer type if there is a match. If there is no match, PEFT checks the built-in LoRA layer types for a match.
|
| 276 |
+
|
| 277 |
+
Therefore, this feature can also be used to override existing dispatch logic, e.g. if you want to use your own LoRA layer for `nn.Linear` instead of using the one provided by PEFT.
|
| 278 |
+
|
| 279 |
+
When creating your custom LoRA module, please follow the same rules as the [existing LoRA modules](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py). Some important constraints to consider:
|
| 280 |
+
|
| 281 |
+
- The custom module should inherit from `nn.Module` and `peft.tuners.lora.layer.LoraLayer`.
|
| 282 |
+
- The `__init__` method of the custom module should have the positional arguments `base_layer` and `adapter_name`. After this, there are additional `**kwargs` that you are free to use or ignore.
|
| 283 |
+
- The learnable parameters should be stored in an `nn.ModuleDict` or `nn.ParameterDict`, where the key corresponds to the name of the specific adapter (remember that a model can have more than one adapter at a time).
|
| 284 |
+
- The name of these learnable parameter attributes should start with `"lora_"`, e.g. `self.lora_new_param = ...`.
|
| 285 |
+
- Some methods are optional, e.g. you only need to implement `merge` and `unmerge` if you want to support weight merging.
|
| 286 |
+
|
| 287 |
+
Currently, the information about the custom module does not persist when you save the model. When loading the model, you have to register the custom modules again.
|
| 288 |
+
|
| 289 |
+
```python
|
| 290 |
+
# saving works as always and includes the parameters of the custom modules
|
| 291 |
+
peft_model.save_pretrained(<model-path>)
|
| 292 |
+
|
| 293 |
+
# loading the model later:
|
| 294 |
+
base_model = ...
|
| 295 |
+
# load the LoRA config that you saved earlier
|
| 296 |
+
config = LoraConfig.from_pretrained(<model-path>)
|
| 297 |
+
# register the custom module again, the same way as the first time
|
| 298 |
+
custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
|
| 299 |
+
config._register_custom_module(custom_module_mapping)
|
| 300 |
+
# pass the config instance to from_pretrained:
|
| 301 |
+
peft_model = PeftModel.from_pretrained(model, tmp_path / "lora-custom-module", config=config)
|
| 302 |
+
```
|
| 303 |
+
|
| 304 |
+
If you use this feature and find it useful, or if you encounter problems, let us know by creating an issue or a discussion on GitHub. This allows us to estimate the demand for this feature and add a public API if it is sufficiently high.
|
peft/docs/source/developer_guides/lora.md
ADDED
|
@@ -0,0 +1,822 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# LoRA
|
| 18 |
+
|
| 19 |
+
LoRA is a low-rank decomposition method that reduces the number of trainable parameters, which speeds up fine-tuning of large models and uses less memory. In PEFT, using LoRA is as easy as setting up a [`LoraConfig`] and wrapping it with [`get_peft_model`] to create a trainable [`PeftModel`].
|
| 20 |
+
|
| 21 |
+
This guide explores in more detail other options and features for using LoRA.
|
| 22 |
+
|
| 23 |
+
## Initialization
|
| 24 |
+
|
| 25 |
+
The initialization of LoRA weights is controlled by the parameter `init_lora_weights` in [`LoraConfig`]. By default, PEFT initializes LoRA weights with Kaiming-uniform for weight A and zeros for weight B resulting in an identity transform (same as the reference [implementation](https://github.com/microsoft/LoRA)).
|
| 26 |
+
|
| 27 |
+
It is also possible to pass `init_lora_weights="gaussian"`. As the name suggests, this initializes weight A with a Gaussian distribution and zeros for weight B (this is how [Diffusers](https://huggingface.co/docs/diffusers/index) initializes LoRA weights).
|
| 28 |
+
|
| 29 |
+
```py
|
| 30 |
+
from peft import LoraConfig
|
| 31 |
+
|
| 32 |
+
config = LoraConfig(init_lora_weights="gaussian", ...)
|
| 33 |
+
```
|
| 34 |
+
|
| 35 |
+
There is also an option to set `init_lora_weights=False`, which is useful for debugging and testing; this should be the only situation in which you use this option. When choosing this option, the LoRA weights are initialized such that they do *not* result in an identity transform.
|
| 36 |
+
|
| 37 |
+
```py
|
| 38 |
+
from peft import LoraConfig
|
| 39 |
+
|
| 40 |
+
config = LoraConfig(init_lora_weights=False, ...)
|
| 41 |
+
```
|
| 42 |
+
|
| 43 |
+
### PiSSA
|
| 44 |
+
[PiSSA](https://huggingface.co/papers/2404.02948) initializes the LoRA adapter using the principal singular values and singular vectors. This straightforward modification allows PiSSA to converge more rapidly than LoRA and ultimately attain superior performance. Moreover, PiSSA reduces the quantization error compared to QLoRA, leading to further enhancements.
|
| 45 |
+
|
| 46 |
+
Configure the initialization method to "pissa", which may take several minutes to execute SVD on the pre-trained model:
|
| 47 |
+
```python
|
| 48 |
+
from peft import LoraConfig
|
| 49 |
+
config = LoraConfig(init_lora_weights="pissa", ...)
|
| 50 |
+
```
|
| 51 |
+
Alternatively, execute fast SVD, which takes only a few seconds. The number of iterations determines the trade-off between the error and computation time:
|
| 52 |
+
```python
|
| 53 |
+
lora_config = LoraConfig(init_lora_weights="pissa_niter_[number of iters]", ...)
|
| 54 |
+
```
|
| 55 |
+
For detailed instruction on using PiSSA, please follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/pissa_finetuning).
|
| 56 |
+
|
| 57 |
+
### CorDA
|
| 58 |
+
|
| 59 |
+
[CorDA](https://huggingface.co/papers/2406.05223) builds task-aware LoRA adapters from a weight decomposition oriented either by the context of the downstream task to learn (instruction-previewed mode, IPM) or by the world knowledge to maintain (knowledge-preserved mode, KPM).
|
| 60 |
+
The KPM not only achieves better performance than LoRA on fine-tuning tasks, but also mitigates the catastrophic forgetting of pre-trained world knowledge.
|
| 61 |
+
When preserving pre-trained knowledge is not a concern,
|
| 62 |
+
the IPM is favored because it can further accelerate convergence and enhance the fine-tuning performance.
|
| 63 |
+
|
| 64 |
+
You need to configure the initialization method to "corda", and specify the mode of IPM or KPM and the dataset to collect covariance matrices.
|
| 65 |
+
|
| 66 |
+
```py
import torch
# CordaConfig, LoraConfig, get_peft_model and preprocess_corda are provided by PEFT;
# see the CorDA example linked below for the complete setup.

@torch.no_grad()
def run_model():
    # Assume `model` and `dataset` are in context...
    model.eval()
    for batch in dataset:
        model(**batch)


corda_config = CordaConfig(
    corda_method="kpm",
)
lora_config = LoraConfig(
    init_lora_weights="corda",
    corda_config=corda_config,
)
preprocess_corda(model, lora_config, run_model=run_model)
peft_model = get_peft_model(model, lora_config)
```
|
| 85 |
+
|
| 86 |
+
For detailed instruction on using CorDA, please follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/corda_finetuning).
|
| 87 |
+
|
| 88 |
+
### OLoRA
|
| 89 |
+
[OLoRA](https://huggingface.co/papers/2406.01775) utilizes QR decomposition to initialize the LoRA adapters. OLoRA translates the base weights of the model by a factor of their QR decompositions, i.e., it mutates the weights before performing any training on them. This approach significantly improves stability, accelerates convergence speed, and ultimately achieves superior performance.
|
| 90 |
+
|
| 91 |
+
You just need to pass a single additional option to use OLoRA:
|
| 92 |
+
```python
|
| 93 |
+
from peft import LoraConfig
|
| 94 |
+
config = LoraConfig(init_lora_weights="olora", ...)
|
| 95 |
+
```
|
| 96 |
+
For more advanced usage, please refer to our [documentation](https://github.com/huggingface/peft/tree/main/examples/olora_finetuning).
|
| 97 |
+
|
| 98 |
+
### EVA
|
| 99 |
+
[EVA](https://huggingface.co/papers/2410.07170) performs SVD on the input activations of each layer and uses the right-singular vectors to initialize LoRA weights. It is therefore a data-driven initialization scheme. Furthermore, EVA adaptively allocates ranks across layers based on their "explained variance ratio" - a metric derived from the SVD analysis.
|
| 100 |
+
|
| 101 |
+
You can use EVA by setting `init_lora_weights="eva"` and defining [`EvaConfig`] in [`LoraConfig`]:
|
| 102 |
+
```python
|
| 103 |
+
from peft import LoraConfig, EvaConfig
|
| 104 |
+
peft_config = LoraConfig(
|
| 105 |
+
init_lora_weights = "eva",
|
| 106 |
+
eva_config = EvaConfig(rho = 2.0),
|
| 107 |
+
...
|
| 108 |
+
)
|
| 109 |
+
```
|
| 110 |
+
The parameter `rho` (≥ 1.0) determines how much redistribution is allowed. When `rho=1.0` and `r=16`, LoRA adapters are limited to exactly 16 ranks, preventing any redistribution from occurring. A recommended value for EVA with redistribution is 2.0, meaning the maximum rank allowed for a layer is 2r.
|
| 111 |
+
|
| 112 |
+
It is recommended to perform EVA initialization on an accelerator (e.g. a CUDA GPU or Intel XPU) as it is much faster. To optimize the amount of available memory for EVA, you can use the `low_cpu_mem_usage` flag in [`get_peft_model`]:
|
| 113 |
+
```python
|
| 114 |
+
peft_model = get_peft_model(model, peft_config, low_cpu_mem_usage=True)
|
| 115 |
+
```
|
| 116 |
+
Then, call [`initialize_lora_eva_weights`] to initialize the EVA weights (in most cases the dataloader used for EVA initialization can be the same as the one used for finetuning):
|
| 117 |
+
```python
|
| 118 |
+
initialize_lora_eva_weights(peft_model, dataloader)
|
| 119 |
+
```
|
| 120 |
+
EVA works out of the box with bitsandbytes. Simply initialize the model with `quantization_config` and call [`initialize_lora_eva_weights`] as usual.
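For illustration, a minimal sketch of this combination could look as follows; the model id, the `target_modules` choice, and the `dataloader` are placeholders, not requirements:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import EvaConfig, LoraConfig, get_peft_model, initialize_lora_eva_weights

# assumes `dataloader` is defined elsewhere, e.g. the same dataloader used for finetuning
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", quantization_config=bnb_config
)

peft_config = LoraConfig(
    init_lora_weights="eva",
    eva_config=EvaConfig(rho=2.0),
    target_modules="all-linear",  # illustrative choice
)
peft_model = get_peft_model(base_model, peft_config, low_cpu_mem_usage=True)
initialize_lora_eva_weights(peft_model, dataloader)
```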
|
| 121 |
+
|
| 122 |
+
> [!TIP]
|
| 123 |
+
> For further instructions on using EVA, please refer to our [documentation](https://github.com/huggingface/peft/tree/main/examples/eva_finetuning).
|
| 124 |
+
|
| 125 |
+
### LoftQ
|
| 126 |
+
|
| 127 |
+
#### Standard approach
|
| 128 |
+
|
| 129 |
+
When quantizing the base model for QLoRA training, consider using the [LoftQ initialization](https://huggingface.co/papers/2310.08659), which has been shown to improve performance when training quantized models. The idea is that the LoRA weights are initialized such that the quantization error is minimized. To use LoftQ, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).
|
| 130 |
+
|
| 131 |
+
In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as quant type in your quantization config when using 4bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
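As a rough sketch of those two recommendations (the task type here is just an example, and the actual LoftQ setup still follows the linked instructions):

```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# nf4 is the recommended 4-bit quant type when combining LoftQ with bitsandbytes
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
# target as many linear layers as possible so that LoftQ can be applied to them
lora_config = LoraConfig(target_modules="all-linear", task_type="CAUSAL_LM")
```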
|
| 132 |
+
|
| 133 |
+
#### A more convenient way
|
| 134 |
+
|
| 135 |
+
An easier but more limited way to apply LoftQ initialization is to use the convenience function `replace_lora_weights_loftq`. This takes the quantized PEFT model as input and replaces the LoRA weights in-place with their LoftQ-initialized counterparts.
|
| 136 |
+
|
| 137 |
+
```python
|
| 138 |
+
from peft import replace_lora_weights_loftq
|
| 139 |
+
from transformers import BitsAndBytesConfig
|
| 140 |
+
|
| 141 |
+
bnb_config = BitsAndBytesConfig(load_in_4bit=True, ...)
|
| 142 |
+
base_model = AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
|
| 143 |
+
# note: don't pass init_lora_weights="loftq" or loftq_config!
|
| 144 |
+
lora_config = LoraConfig(task_type="CAUSAL_LM")
|
| 145 |
+
peft_model = get_peft_model(base_model, lora_config)
|
| 146 |
+
replace_lora_weights_loftq(peft_model)
|
| 147 |
+
```
|
| 148 |
+
|
| 149 |
+
`replace_lora_weights_loftq` also allows you to pass a `callback` argument to give you more control over which layers should be modified or not, which empirically can improve the results quite a lot. To see a more elaborate example of this, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/loftq_finetuning/LoftQ_weight_replacement.ipynb).
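As a rough, untested sketch (assuming the callback receives the model and the name of the module that was just replaced, and returns whether to keep the replacement; see the linked notebook for the real, validated pattern):

```python
def my_callback(model, module_name):
    # Hypothetical rule: only keep the LoftQ replacement for attention projections.
    # A more useful callback would compare model outputs on a sample batch before
    # and after the replacement, as demonstrated in the linked notebook.
    return ("q_proj" in module_name) or ("v_proj" in module_name)

replace_lora_weights_loftq(peft_model, callback=my_callback)
```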
|
| 150 |
+
|
| 151 |
+
`replace_lora_weights_loftq` implements only one iteration step of LoftQ. This means that only the LoRA weights are updated, instead of iteratively updating LoRA weights and quantized base model weights. This may lead to lower performance but has the advantage that we can use the original quantized weights derived from the base model, instead of having to keep an extra copy of modified quantized weights. Whether this tradeoff is worthwhile depends on the use case.
|
| 152 |
+
|
| 153 |
+
At the moment, `replace_lora_weights_loftq` has these additional limitations:
|
| 154 |
+
|
| 155 |
+
- Model files must be stored as a `safetensors` file.
|
| 156 |
+
- Only bitsandbytes 4bit quantization is supported.
|
| 157 |
+
|
| 158 |
+
> [!TIP]
|
| 159 |
+
> Learn more about how PEFT works with quantization in the [Quantization](quantization) guide.
|
| 160 |
+
|
| 161 |
+
### Rank-stabilized LoRA
|
| 162 |
+
|
| 163 |
+
Another way to initialize [`LoraConfig`] is with the [rank-stabilized LoRA (rsLoRA)](https://huggingface.co/papers/2312.03732) method. The LoRA architecture scales each adapter during every forward pass by a fixed scalar which is set at initialization and depends on the rank `r`. The scalar is given by `lora_alpha/r` in the original implementation, but rsLoRA uses `lora_alpha/math.sqrt(r)` which stabilizes the adapters and increases the performance potential from using a higher `r`.
|
| 164 |
+
|
| 165 |
+
```py
|
| 166 |
+
from peft import LoraConfig
|
| 167 |
+
|
| 168 |
+
config = LoraConfig(use_rslora=True, ...)
|
| 169 |
+
```
|
| 170 |
+
### Activated LoRA (aLoRA)
|
| 171 |
+
|
| 172 |
+
Activated LoRA (aLoRA) is a low rank adapter architecture for Causal LMs that allows for reusing existing base model KV cache for more efficient inference. This approach is best suited for inference pipelines which rely on the base model for most tasks/generations, but use aLoRA adapter(s) to perform specialized task(s) within the chain. For example, checking or correcting generated outputs of the base model. In these settings, inference times can be sped up by an order of magnitude or more. For more information on aLoRA and many example use cases, see https://huggingface.co/papers/2504.12397.
|
| 173 |
+
|
| 174 |
+
This technique scans for the last occurrence of an invocation sequence (`alora_invocation_tokens`) in each input (this can be as short as 1 token), and activates the adapter weights on tokens starting with the beginning of the invocation sequence (any inputs after the invocation sequence are also adapted, and all generated tokens will use the adapted weights). Weights on prior tokens are left un-adapted -- making the cache for those tokens interchangeable with base model cache due to the causal attention mask in Causal LMs. Usage is very similar to standard LoRA, with the key difference that this invocation sequence must be specified when the adapter is created:
|
| 175 |
+
|
| 176 |
+
```py
|
| 177 |
+
from peft import LoraConfig
|
| 178 |
+
|
| 179 |
+
config = LoraConfig(alora_invocation_tokens=alora_invocation_tokens, task_type="CAUSAL_LM", ...)
|
| 180 |
+
```
|
| 181 |
+
|
| 182 |
+
where `alora_invocation_tokens` is a list of integer token ids. Given a desired invocation string, this can be obtained as
|
| 183 |
+
```py
invocation_string = "placeholder"
alora_invocation_tokens = tokenizer.encode(invocation_string, add_special_tokens=False)
```
|
| 187 |
+
where the tokenizer is the tokenizer for the base model. Note that we have `add_special_tokens=False` to avoid adding SOS/EOS tokens in our search string (which will most likely cause failure to find).
|
| 188 |
+
|
| 189 |
+
**Notes**
|
| 190 |
+
* aLoRA is only supported for `task_type=CAUSAL_LM` tasks due to its focus on cache reuse.
|
| 191 |
+
* Since the weights are adapted on fewer tokens, often (not always) aLoRA requires higher rank (`r`) than LoRA. `r=32` can be a good starting point.
|
| 192 |
+
* aLoRA weights cannot be merged into the base model by definition, since the adapter weights are selectively applied to a subset of tokens. Attempts to merge will throw errors.
|
| 193 |
+
* Beam search is not yet supported.
|
| 194 |
+
* It is generally not recommended to add new tokens to the tokenizer that are not present in the base model, as this can complicate the target use case of both the base model and adapter model operating on overlapping context. That said, there is a possible workaround by first efficiently adding [trainable tokens](https://huggingface.co/docs/peft/en/package_reference/trainable_tokens) to the base model prior to training the adapter.
|
| 195 |
+
|
| 196 |
+
#### Choice of invocation sequence and SFT design
|
| 197 |
+
|
| 198 |
+
Each input must have the `alora_invocation_tokens` sequence present; it is not added automatically. To maximize model performance without compromising cache reuse, it is recommended to have the adapter weights activated early, i.e. at the start of any adapter-specific prompting, but after any long inputs such as prior generations or documents. As with any model, formatting should be consistent between train and test.
|
| 200 |
+
|
| 201 |
+
Consider the following example, where the base model has a chat template,
|
| 202 |
+
and the goal is to train the adapter to generate a desired output.
|
| 203 |
+
|
| 204 |
+
* Option 1: If there is no task-specific prompt, i.e. the input is a chat history with the `assistant` prompt, then the chat template's `assistant` prompt (e.g. `<|start_of_role|>assistant<|end_of_role|>`) is a natural choice for the invocation string. See the model's chat template to find the prompt for the model.
|
| 205 |
+
* Option 2: If there is a task-specific prompt for the adapter that describes the task the adapter is learning, and that prompt is put as a `user` turn immediately prior to the generation, then the chat template's `user` prompt (e.g. `<|start_of_role|>user<|end_of_role|>`) is a natural choice for the invocation string.
|
| 206 |
+
|
| 207 |
+
Once deciding on an invocation string, get the model tokenizer and obtain `alora_invocation_tokens` as
|
| 208 |
+
```py
alora_invocation_tokens = tokenizer.encode(invocation_string, add_special_tokens=False)
```
|
| 211 |
+
|
| 212 |
+
An example inference setup is at [alora finetuning](https://github.com/huggingface/peft/blob/main/examples/alora_finetuning/alora_finetuning.py).
|
| 213 |
+
|
| 214 |
+
**Note** If using custom strings for the invocation string, make sure that the start and end of the string are special tokens to avoid issues with tokenization at the boundaries.
|
| 215 |
+
|
| 216 |
+
To see why, imagine that 'a', 'b', 'c', and 'ab' are tokens in your tokenizer (numbers 1, 2, 3, 4 respectively). Suppose that your alora_invocation_tokens = [2, 3]. Now imagine your input string is "abc". Because "ab" is a token, this will get tokenized as [4,3]. So the alora_invocation_tokens will fail to be found, despite the string "bc" being in it. If the start and end of the invocation string are special tokens, however, this failure case will never happen since special tokens are never tokenized into the same token with other characters.
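If in doubt, a small sanity check along these lines can confirm that the invocation sequence survives tokenization; `tokenizer`, `prompt`, and `alora_invocation_tokens` are assumed to be defined as above:

```python
def contains_invocation(input_ids, invocation_tokens):
    # True if the invocation token sequence occurs contiguously in the tokenized input
    n = len(invocation_tokens)
    return any(input_ids[i : i + n] == invocation_tokens for i in range(len(input_ids) - n + 1))

input_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
assert contains_invocation(input_ids, alora_invocation_tokens), "invocation sequence not found after tokenization"
```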
|
| 217 |
+
|
| 218 |
+
#### Using (and reusing) cache for generation
|
| 219 |
+
The main purpose of Activated LoRA is to make KV cache interchangeable between the base model and aLoRA adapter models **prior to the invocation sequence** since base and adapted KV values are not compatible. Specifically, keys and values stored during one model generation can be used in subsequent generations to avoid expensive prefill operations for context tokens. When sharing cache between the base model and aLoRA adapters, there are 2 main patterns:
|
| 220 |
+
1. The base model has generated something, and an aLoRA adapter is then called to do a followup generation. Example: the base model answers a question, and an aLoRA trained to detect hallucinations checks the base model response.
|
| 221 |
+
2. An aLoRA adapter has generated something, and the base model or a different aLoRA adapter is called to do a followup generation where there is partial context overlap with the original aLoRA. Example: The user provides a query, and an aLoRA rewrites the query to be more self-contained and improve retrieval in a RAG system. Then, documents are retrieved and loaded into context, an aLoRA checks if these documents are indeed relevant to the question, and then the base model generates an answer.
|
| 222 |
+
|
| 223 |
+
|
| 224 |
+
To demonstrate the above behaviors when using caching, we're using [DynamicCache](https://huggingface.co/docs/transformers/en/kv_cache) from `transformers`. Care must be taken to ensure that adapted cache values are not mixed with base cache values. In particular, an extra step is required for sharing the cache when there is partial context overlap (pattern 2).
|
| 225 |
+
|
| 226 |
+
**Pattern 1: Base model followed by aLoRA** Here, the entire input and generation from the base model is input into the aLoRA adapter, along with the invocation sequence:
|
| 227 |
+
```py
from transformers import DynamicCache
...
cache = DynamicCache()
inputs_base = tokenizer(prompt_base, return_tensors="pt")

# Generate from the base model and save the cache
with model_alora.disable_adapter():
    output = model_alora.generate(
        inputs_base["input_ids"].to(device),
        attention_mask=inputs_base["attention_mask"].to(device),
        past_key_values=cache,
        return_dict_in_generate=True,
    )
output_text_base = tokenizer.decode(output.sequences[0])
cache = output.past_key_values

# Generate with the aLoRA adapter from the cache
prompt_alora = output_text_base + INVOCATION_STRING
inputs_alora = tokenizer(prompt_alora, return_tensors="pt").to(device)
output = model_alora.generate(**inputs_alora, past_key_values=cache)
output_text_alora = tokenizer.decode(output[0])

# Note: the cache is now tainted with adapter values and cannot be used in the base model from here on!
```
|
| 246 |
+
|
| 247 |
+
**Pattern 2: aLoRA generation followed by base model (or another aLoRA) with partial context overlap** Here, we prefill the shared context using the base model, and then generate.
|
| 248 |
+
|
| 249 |
+
```py
from transformers import DynamicCache
import copy
...
cache = DynamicCache()
inputs_shared = tokenizer(prompt_shared, return_tensors="pt").to(device)

# Prefill from the base model and save the cache
with model_alora.disable_adapter():
    with torch.no_grad():
        model_alora(**inputs_shared, past_key_values=cache)
cache_copy = copy.deepcopy(cache)

# Generate from aLoRA using the prefilled cache
prompt_alora = prompt_shared + INVOCATION_STRING
inputs_alora = tokenizer(prompt_alora, return_tensors="pt").to(device)
output = model_alora.generate(**inputs_alora, past_key_values=cache)
output_text_alora = tokenizer.decode(output[0])

# Generate from the base model using the saved cache not tainted by aLoRA KV values
prompt_base = prompt_shared
inputs_base = tokenizer(prompt_base, return_tensors="pt").to(device)
with model_alora.disable_adapter():
    output = model_alora.generate(**inputs_base, past_key_values=cache_copy)
output_text_base = tokenizer.decode(output[0])
```
|
| 275 |
+
|
| 276 |
+
### Weight-Decomposed Low-Rank Adaptation (DoRA)
|
| 277 |
+
|
| 278 |
+
This technique decomposes the updates of the weights into two parts, magnitude and direction. Direction is handled by normal LoRA, whereas the magnitude is handled by a separate learnable parameter. This can improve the performance of LoRA, especially at low ranks. For more information on DoRA, see https://huggingface.co/papers/2402.09353.
|
| 279 |
+
|
| 280 |
+
```py
|
| 281 |
+
from peft import LoraConfig
|
| 282 |
+
|
| 283 |
+
config = LoraConfig(use_dora=True, ...)
|
| 284 |
+
```
|
| 285 |
+
|
| 286 |
+
If parts of the model or the DoRA adapter are offloaded to CPU you can get a significant speedup at the cost of some temporary (ephemeral) VRAM overhead by using `ephemeral_gpu_offload=True` in `config.runtime_config`.
|
| 287 |
+
|
| 288 |
+
```py
|
| 289 |
+
from peft import LoraConfig, LoraRuntimeConfig
|
| 290 |
+
|
| 291 |
+
config = LoraConfig(use_dora=True, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=True), ...)
|
| 292 |
+
```
|
| 293 |
+
|
| 294 |
+
A `PeftModel` with a DoRA adapter can also be loaded with `ephemeral_gpu_offload=True` flag using the `from_pretrained` method as well as the `load_adapter` method.
|
| 295 |
+
|
| 296 |
+
```py
|
| 297 |
+
from peft import PeftModel
|
| 298 |
+
|
| 299 |
+
model = PeftModel.from_pretrained(base_model, peft_model_id, ephemeral_gpu_offload=True)
|
| 300 |
+
```
|
| 301 |
+
|
| 302 |
+
DoRA is optimized (computes faster and takes less memory) for models in evaluation mode, or when dropout is set to 0. We reuse the base result at those times to get the speedup.
Running [dora finetuning](https://github.com/huggingface/peft/blob/main/examples/dora_finetuning/dora_finetuning.py)
with `CUDA_VISIBLE_DEVICES=0 ZE_AFFINITY_MASK=0 time python examples/dora_finetuning/dora_finetuning.py --quantize --lora_dropout 0 --batch_size 16 --eval_step 2 --use_dora`
on a 4090 with gradient accumulation set to 2 and max steps set to 20 resulted in the following observations:
|
| 307 |
+
|
| 308 |
+
| | Without Optimization | With Optimization |
|
| 309 |
+
| :--: | :--: | :--: |
|
| 310 |
+
| train_runtime | 359.7298 | **279.2676** |
|
| 311 |
+
| train_samples_per_second | 1.779 | **2.292** |
|
| 312 |
+
| train_steps_per_second | 0.056 | **0.072** |
|
| 313 |
+
|
| 314 |
+
#### Caveats
|
| 315 |
+
|
| 316 |
+
- DoRA only supports embedding, linear, and Conv2d layers at the moment.
|
| 317 |
+
- DoRA introduces a bigger overhead than pure LoRA, so it is recommended to merge weights for inference, see [`LoraModel.merge_and_unload`].
|
| 318 |
+
- DoRA should work with weights quantized with bitsandbytes ("QDoRA"). However, issues have been reported when using QDoRA with DeepSpeed Zero2.
|
| 319 |
+
|
| 320 |
+
### QLoRA-style training
|
| 321 |
+
|
| 322 |
+
The default LoRA settings in PEFT add trainable weights to the query and value layers of each attention block. But [QLoRA](https://hf.co/papers/2305.14314), which adds trainable weights to all the linear layers of a transformer model, can provide performance equal to a fully finetuned model. To apply LoRA to all the linear layers, like in QLoRA, set `target_modules="all-linear"` (easier than specifying individual modules by name which can vary depending on the architecture).
|
| 323 |
+
|
| 324 |
+
```py
|
| 325 |
+
config = LoraConfig(target_modules="all-linear", ...)
|
| 326 |
+
```
|
| 327 |
+
|
| 328 |
+
### Memory efficient Layer Replication with LoRA
|
| 329 |
+
|
| 330 |
+
An approach used to improve the performance of models is to expand a model by duplicating layers to build a larger model from a pretrained model of a given size. For example, increasing a 7B model to a 10B model as described in the [SOLAR](https://huggingface.co/papers/2312.15166) paper. PEFT LoRA supports this kind of expansion in a memory-efficient manner, allowing further fine-tuning with LoRA adapters attached to the layers after replication. The replicated layers do not take additional memory because they share the underlying weights, so the only additional memory required is the memory for the adapter weights. To use this feature, create a config with the `layer_replication` argument.
|
| 331 |
+
|
| 332 |
+
```py
|
| 333 |
+
config = LoraConfig(layer_replication=[[0,4], [2,5]], ...)
|
| 334 |
+
```
|
| 335 |
+
|
| 336 |
+
Assuming the original model had 5 layers `[0, 1, 2 ,3, 4]`, this would create a model with 7 layers arranged as `[0, 1, 2, 3, 2, 3, 4]`. This follows the [mergekit](https://github.com/arcee-ai/mergekit) pass through merge convention where sequences of layers specified as start inclusive and end exclusive tuples are stacked to build the final model. Each layer in the final model gets its own distinct set of LoRA adapters.
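To make the convention concrete, the final layer order can be derived from the config value with a small sketch like this:

```python
layer_replication = [[0, 4], [2, 5]]  # start inclusive, end exclusive

# stack the layer ranges in order (mergekit pass-through style)
final_layers = [i for start, end in layer_replication for i in range(start, end)]
print(final_layers)  # [0, 1, 2, 3, 2, 3, 4]
```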
|
| 337 |
+
|
| 338 |
+
[Fewshot-Metamath-OrcaVicuna-Mistral-10B](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B) is an example of a model trained using this method on Mistral-7B expanded to 10B. The
|
| 339 |
+
[adapter_config.json](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B/blob/main/adapter_config.json) shows a sample LoRA adapter config applying this method for fine-tuning.
|
| 340 |
+
|
| 341 |
+
### Fine grained control over ranks and alpha (scaling)
|
| 342 |
+
|
| 343 |
+
By default, all layers targeted with LoRA will have the same rank `r` and the same `lora_alpha` (which determines the LoRA scaling), depending on what was specified in the [`LoraConfig`]. In some cases, however, you may want to indicate different values for different layers. This is possible by passing the `rank_pattern` and `alpha_pattern` arguments to [`LoraConfig`]. These arguments should be dictionaries with the key being the layer name and the value being the rank/alpha value. The keys can be [regular expressions](https://docs.python.org/3/library/re.html) (regex). All LoRA layers that are not explicitly mentioned in `rank_pattern` and `alpha_pattern` will take the default `r` and `lora_alpha` values.
|
| 344 |
+
|
| 345 |
+
To give an example, let's assume that we have a model with the following structure:
|
| 346 |
+
|
| 347 |
+
```python
|
| 348 |
+
>>> print(model)
|
| 349 |
+
Outer(
|
| 350 |
+
(foo): Linear(...)
|
| 351 |
+
(module): Middle(
|
| 352 |
+
(foo): Linear(...)
|
| 353 |
+
(foobar): Linear(...)
|
| 354 |
+
(module): Inner(
|
| 355 |
+
(foo): Linear(...)
|
| 356 |
+
(barfoo): Linear(...)
|
| 357 |
+
)
|
| 358 |
+
)
|
| 359 |
+
)
|
| 360 |
+
```
|
| 361 |
+
|
| 362 |
+
- `rank_pattern={"foo": 42}` will match all 3 `foo` layers. Neither `foobar` nor `barfoo` are matched.
|
| 363 |
+
- `rank_pattern={"^foo": 42}` will only match the `foo` layer of the model, but neither `module.foo` nor `module.module.foo`. This is because the `^` means "start of string" when using regular expressions, and only `foo` starts with `"foo"`, the other layer names have prefixes.
|
| 364 |
+
- `rank_pattern={"^module.foo": 42}` matches only `module.foo`, but not `module.module.foo`, for the same reason.
|
| 365 |
+
- `rank_pattern={"module.foo": 42}` matches both `module.foo` and `module.module.foo`, but not `foo`.
|
| 366 |
+
- `rank_pattern={"^foo": 42, "^module.module.foo": 55}` matches `foo` and `module.module.foo`, respectively, but not `module.foo`.
|
| 367 |
+
- There is no need to indicate `$` to mark the end of the match, as this is added automatically by PEFT.
|
| 368 |
+
|
| 369 |
+
The same logic applies to `alpha_pattern`. If you're in doubt, don't try to get fancy with regular expressions -- just pass the full name for each module with a different rank/alpha, preceded by the `^` prefix, and you should be good.
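For instance, with the toy model above, a config that gives `module.module.foo` a higher rank and alpha while all other targeted layers keep the defaults could look like this sketch (the concrete values are arbitrary):

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,            # default rank for all other targeted layers
    lora_alpha=16,  # default scaling
    target_modules=["foo", "foobar", "barfoo"],
    rank_pattern={"^module.module.foo": 42},
    alpha_pattern={"^module.module.foo": 84},
)
```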
|
| 370 |
+
|
| 371 |
+
### Targeting `nn.Parameter` directly
|
| 372 |
+
|
| 373 |
+
> [!WARNING]
|
| 374 |
+
> This feature is experimental and subject to change.
|
| 375 |
+
|
| 376 |
+
Generally, you should use `target_modules` to target the module (e.g. `nn.Linear`). However, in some circumstances, this is not possible. E.g., in many mixture-of-experts (MoE) layers in HF Transformers, instead of using `nn.Linear`, an `nn.Parameter` is used. PEFT normally overwrites the `forward` method for LoRA, but for `nn.Parameter`, there is none. Therefore, to apply LoRA to that parameter, it needs to be targeted with `target_parameters`. As an example, for [Llama4](https://huggingface.co/collections/meta-llama/llama-4-67f0c30d9fe03840bc9d0164), you can pass: `target_parameters=['feed_forward.experts.gate_up_proj', 'feed_forward.experts.down_proj']`.
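A minimal sketch of such a config, reusing the Llama4 parameter names from above (the task type is just an example):

```python
from peft import LoraConfig

config = LoraConfig(
    target_parameters=[
        "feed_forward.experts.gate_up_proj",
        "feed_forward.experts.down_proj",
    ],
    task_type="CAUSAL_LM",
)
```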
|
| 377 |
+
|
| 378 |
+
#### Caveats
|
| 379 |
+
|
| 380 |
+
- At the moment, this argument allows targeting 2-dim or 3-dim `nn.Parameter`s. It is assumed that in the case of a 3-dim parameter, the 0th dimension is the expert dimension.
|
| 381 |
+
- It is currently not possible to add multiple LoRA adapters (via `model.add_adapter` or `model.load_adapter`) that use `target_parameters` at the same time.
|
| 382 |
+
|
| 383 |
+
## Optimizers
|
| 384 |
+
|
| 385 |
+
LoRA training can optionally include special purpose optimizers. Currently PEFT supports LoRA-FA and LoRA+.
|
| 386 |
+
|
| 387 |
+
### LoRA-FA Optimizer
|
| 388 |
+
|
| 389 |
+
LoRA training can be more effective and efficient using LoRA-FA, as described in [LoRA-FA](https://huggingface.co/papers/2308.03303). LoRA-FA reduces activation memory consumption by fixing the matrix A and only tuning the matrix B. During training, the gradient of B is optimized to approximate the full parameter fine-tuning gradient. Moreover, the memory consumption of LoRA-FA is not sensitive to the rank (since it erases the activation of $A$), therefore it can improve performance by enlarging the LoRA rank without increasing memory consumption.
|
| 390 |
+
|
| 391 |
+
```py
|
| 392 |
+
from peft import LoraConfig, get_peft_model
|
| 393 |
+
from peft.optimizers import create_lorafa_optimizer
|
| 394 |
+
from transformers import Trainer, get_cosine_schedule_with_warmup
|
| 395 |
+
|
| 396 |
+
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
|
| 397 |
+
|
| 398 |
+
config = LoraConfig(...)
|
| 399 |
+
model = get_peft_model(base_model, config)
|
| 400 |
+
|
| 401 |
+
optimizer = create_lorafa_optimizer(
|
| 402 |
+
model=model,
|
| 403 |
+
r=128,
|
| 404 |
+
lora_alpha=32,
|
| 405 |
+
lr=7e-5,
|
| 406 |
+
)
|
| 407 |
+
|
| 408 |
+
scheduler = get_cosine_schedule_with_warmup(
|
| 409 |
+
optimizer,
|
| 410 |
+
num_warmup_steps=100,
|
| 411 |
+
num_training_steps=1000,
|
| 412 |
+
)
|
| 413 |
+
|
| 414 |
+
trainer = Trainer(
|
| 415 |
+
...,
|
| 416 |
+
optimizers=(optimizer, scheduler),
|
| 417 |
+
)
|
| 418 |
+
```
|
| 419 |
+
|
| 420 |
+
### LoRA+ optimized LoRA
|
| 421 |
+
|
| 422 |
+
LoRA training can be optimized using [LoRA+](https://huggingface.co/papers/2402.12354), which uses different learning rates for the adapter matrices A and B, shown to increase finetuning speed by up to 2x and performance by 1-2%.
|
| 423 |
+
|
| 424 |
+
```py
|
| 425 |
+
from peft import LoraConfig, get_peft_model
|
| 426 |
+
from peft.optimizers import create_loraplus_optimizer
|
| 427 |
+
from transformers import Trainer
|
| 428 |
+
import bitsandbytes as bnb
|
| 429 |
+
|
| 430 |
+
base_model = ...
|
| 431 |
+
config = LoraConfig(...)
|
| 432 |
+
model = get_peft_model(base_model, config)
|
| 433 |
+
|
| 434 |
+
optimizer = create_loraplus_optimizer(
|
| 435 |
+
model=model,
|
| 436 |
+
optimizer_cls=bnb.optim.Adam8bit,
|
| 437 |
+
lr=5e-5,
|
| 438 |
+
loraplus_lr_ratio=16,
|
| 439 |
+
)
|
| 440 |
+
scheduler = None
|
| 441 |
+
|
| 442 |
+
...
|
| 443 |
+
trainer = Trainer(
|
| 444 |
+
...,
|
| 445 |
+
optimizers=(optimizer, scheduler),
|
| 446 |
+
)
|
| 447 |
+
```
|
| 448 |
+
|
| 449 |
+
## Efficiently train tokens alongside LoRA
|
| 450 |
+
|
| 451 |
+
Sometimes it is necessary to not only change some layer's weights but to add new tokens as well. With larger models this can be a memory-costly endeavour. PEFT LoRA adapters support the `trainable_token_indices` parameter which allows tuning of other tokens alongside fine-tuning of specific layers with LoRA. This method only trains the tokens you specify and leaves all other tokens untouched. This saves memory and, in contrast to training the whole embedding matrix, doesn't throw away the learned context of existing token embeddings. Under the hood, this method uses the layer provided by [`TrainableTokensModel`].
|
| 452 |
+
|
| 453 |
+
```py
|
| 454 |
+
# for layer 'embed_tokens'
|
| 455 |
+
config = LoraConfig(trainable_token_indices=[idx_1, idx_2, ...], ...)
|
| 456 |
+
|
| 457 |
+
# specific embedding layer
|
| 458 |
+
config = LoraConfig(trainable_token_indices={'emb_tokens': [idx_1, idx_2, ...]}, ...)
|
| 459 |
+
```
|
| 460 |
+
|
| 461 |
+
In the snippet below we show how to add new tokens to the model and how to train it alongside the other layers in the model.
|
| 462 |
+
|
| 463 |
+
```py
|
| 464 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM
|
| 465 |
+
from peft import get_peft_model, LoraConfig
|
| 466 |
+
|
| 467 |
+
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
|
| 468 |
+
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
|
| 469 |
+
|
| 470 |
+
# we define our new tokens and add them to the tokenizer as special tokens
|
| 471 |
+
special_tokens = ['<|start_think|>', '<|stop_think|>']
|
| 472 |
+
tokenizer.add_special_tokens({'additional_special_tokens': special_tokens})
|
| 473 |
+
|
| 474 |
+
# make room for new tokens in the embedding matrix if it isn't big enough already
|
| 475 |
+
base_model.resize_token_embeddings(max(len(tokenizer), base_model.model.embed_tokens.num_embeddings))
|
| 476 |
+
|
| 477 |
+
# typical LoRA config with `trainable_token_indices` targeting embedding layer `embed_tokens`
|
| 478 |
+
# and specifically our new tokens we just added
|
| 479 |
+
lora_config = LoraConfig(
|
| 480 |
+
target_modules='all-linear',
|
| 481 |
+
trainable_token_indices={'embed_tokens': tokenizer.convert_tokens_to_ids(special_tokens)},
|
| 482 |
+
)
|
| 483 |
+
peft_model = get_peft_model(base_model, lora_config)
|
| 484 |
+
|
| 485 |
+
# proceed to train the model like normal
|
| 486 |
+
[...]
|
| 487 |
+
```
|
| 488 |
+
|
| 489 |
+
The token weights are part of your adapter state dict and saved alongside the LoRA weights.
|
| 490 |
+
If we had used full fine-tuning with `modules_to_save=['embed_tokens']`, we would have stored the full embedding matrix in the checkpoint, leading to a much bigger file.
|
| 491 |
+
|
| 492 |
+
To give a bit of an indication how much VRAM can be saved, a rudimentary comparison of the above example was made between training the embedding matrix fully (`modules_to_save=["embed_tokens"]`), using a LoRA for the embedding matrix (`target_modules=[..., "embed_tokens"]`, rank 32) and trainable tokens (`trainable_token_indices=[...]`, 6 tokens). Trainable tokens used about as much VRAM (15,562MB vs. 15,581MB) as LoRA while being specific to the tokens and saved ~1GB of VRAM over fully training the embedding matrix.
|
| 493 |
+
|
| 494 |
+
|
| 495 |
+
## Merge LoRA weights into the base model
|
| 496 |
+
|
| 497 |
+
While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA adapter. To eliminate latency, use the [`~LoraModel.merge_and_unload`] function to merge the adapter weights with the base model. This allows you to use the newly merged model as a standalone model. The [`~LoraModel.merge_and_unload`] function doesn't keep the adapter weights in memory.
|
| 498 |
+
|
| 499 |
+
Below is a diagram that explains the intuition of LoRA adapter merging:
|
| 500 |
+
|
| 501 |
+
<div class="flex justify-center">
|
| 502 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"/>
|
| 503 |
+
</div>
|
| 504 |
+
|
| 505 |
+
We show in the snippets below how to run that using PEFT.
|
| 506 |
+
|
| 507 |
+
```py
|
| 508 |
+
from transformers import AutoModelForCausalLM
|
| 509 |
+
from peft import PeftModel
|
| 510 |
+
|
| 511 |
+
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
|
| 512 |
+
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
|
| 513 |
+
model = PeftModel.from_pretrained(base_model, peft_model_id)
|
| 514 |
+
model.merge_and_unload()
|
| 515 |
+
```
|
| 516 |
+
|
| 517 |
+
If you need to keep a copy of the weights so you can unmerge the adapter later or delete and load different ones, you should use the [`~LoraModel.merge_adapter`] function instead. Now you have the option to use [`~LoraModel.unmerge_adapter`] to return the base model.
|
| 518 |
+
|
| 519 |
+
```py
|
| 520 |
+
from transformers import AutoModelForCausalLM
|
| 521 |
+
from peft import PeftModel
|
| 522 |
+
|
| 523 |
+
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
|
| 524 |
+
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
|
| 525 |
+
model = PeftModel.from_pretrained(base_model, peft_model_id)
|
| 526 |
+
model.merge_adapter()
|
| 527 |
+
|
| 528 |
+
# unmerge the LoRA layers from the base model
|
| 529 |
+
model.unmerge_adapter()
|
| 530 |
+
```
|
| 531 |
+
|
| 532 |
+
The [`~LoraModel.add_weighted_adapter`] function is useful for merging multiple LoRAs into a new adapter based on a user provided weighting scheme in the `weights` parameter. Below is an end-to-end example.
|
| 533 |
+
|
| 534 |
+
First load the base model:
|
| 535 |
+
|
| 536 |
+
```python
|
| 537 |
+
from transformers import AutoModelForCausalLM, AutoTokenizer
|
| 538 |
+
from peft import PeftModel
|
| 539 |
+
import torch
|
| 540 |
+
|
| 541 |
+
base_model = AutoModelForCausalLM.from_pretrained(
|
| 542 |
+
"mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
|
| 543 |
+
)
|
| 544 |
+
```
|
| 545 |
+
|
| 546 |
+
Then we load the first adapter:
|
| 547 |
+
|
| 548 |
+
```python
|
| 549 |
+
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
|
| 550 |
+
model = PeftModel.from_pretrained(base_model, peft_model_id, adapter_name="sft")
|
| 551 |
+
```
|
| 552 |
+
|
| 553 |
+
Then load a different adapter and merge it with the first one:
|
| 554 |
+
|
| 555 |
+
```python
|
| 556 |
+
weighted_adapter_name = "sft-dpo"
|
| 557 |
+
model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
|
| 558 |
+
model.add_weighted_adapter(
|
| 559 |
+
adapters=["sft", "dpo"],
|
| 560 |
+
weights=[0.7, 0.3],
|
| 561 |
+
adapter_name=weighted_adapter_name,
|
| 562 |
+
combination_type="linear"
|
| 563 |
+
)
|
| 564 |
+
model.set_adapter(weighted_adapter_name)
|
| 565 |
+
```
|
| 566 |
+
|
| 567 |
+
> [!TIP]
|
| 568 |
+
> There are several supported methods for `combination_type`. Refer to the [documentation](../package_reference/lora#peft.LoraModel.add_weighted_adapter) for more details. Note that "svd" as the `combination_type` is not supported when using `torch.float16` or `torch.bfloat16` as the datatype.
|
| 569 |
+
|
| 570 |
+
Now, perform inference:
|
| 571 |
+
|
| 572 |
+
```python
|
| 573 |
+
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
|
| 574 |
+
|
| 575 |
+
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
|
| 576 |
+
|
| 577 |
+
prompt = "Hey, are you conscious? Can you talk to me?"
|
| 578 |
+
inputs = tokenizer(prompt, return_tensors="pt")
|
| 579 |
+
inputs = {k: v.to(device) for k, v in inputs.items()}
|
| 580 |
+
|
| 581 |
+
with torch.no_grad():
|
| 582 |
+
generate_ids = model.generate(**inputs, max_length=30)
|
| 583 |
+
outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
|
| 584 |
+
print(outputs)
|
| 585 |
+
```
|
| 586 |
+
|
| 587 |
+
## Load adapters
|
| 588 |
+
|
| 589 |
+
Adapters can be loaded onto a pretrained model with [`~PeftModel.load_adapter`], which is useful for trying out different adapters whose weights aren't merged. Set the active adapter weights with the [`~LoraModel.set_adapter`] function.
|
| 590 |
+
|
| 591 |
+
```py
|
| 592 |
+
from transformers import AutoModelForCausalLM
|
| 593 |
+
from peft import PeftModel
|
| 594 |
+
|
| 595 |
+
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
|
| 596 |
+
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
|
| 597 |
+
model = PeftModel.from_pretrained(base_model, peft_model_id)
|
| 598 |
+
|
| 599 |
+
# load different adapter
|
| 600 |
+
model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
|
| 601 |
+
|
| 602 |
+
# set adapter as active
|
| 603 |
+
model.set_adapter("dpo")
|
| 604 |
+
```
|
| 605 |
+
|
| 606 |
+
To return the base model, you could use [`~LoraModel.unload`] to unload all of the LoRA modules or [`~LoraModel.delete_adapter`] to delete the adapter entirely.
|
| 607 |
+
|
| 608 |
+
```py
|
| 609 |
+
# unload adapter
|
| 610 |
+
model.unload()
|
| 611 |
+
|
| 612 |
+
# delete adapter
|
| 613 |
+
model.delete_adapter("dpo")
|
| 614 |
+
```
|
| 615 |
+
|
| 616 |
+
## Inference with different LoRA adapters in the same batch
|
| 617 |
+
|
| 618 |
+
Normally, each inference batch has to use the same adapter(s) in PEFT. This can sometimes be annoying, because we may have batches that contain samples intended to be used with different LoRA adapters. For example, we could have a base model that works well in English and two more LoRA adapters, one for French and one for German. Usually, we would have to split our batches such that each batch only contains samples of one of the languages; we cannot combine different languages in the same batch.
|
| 619 |
+
|
| 620 |
+
Thankfully, it is possible to mix different LoRA adapters in the same batch using the `adapter_name` argument. Below, we show an example of how this works in practice. First, let's load the base model, English, and the two adapters, French and German, like this:
|
| 621 |
+
|
| 622 |
+
```python
|
| 623 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM
|
| 624 |
+
from peft import PeftModel
|
| 625 |
+
|
| 626 |
+
model_id = ...
|
| 627 |
+
tokenizer = AutoTokenizer.from_pretrained(model_id)
|
| 628 |
+
|
| 629 |
+
model = AutoModelForCausalLM.from_pretrained(model_id)
|
| 630 |
+
# load the LoRA adapter for French
|
| 631 |
+
peft_model = PeftModel.from_pretrained(model, <path>, adapter_name="adapter_fr")
|
| 632 |
+
# next, load the LoRA adapter for German
|
| 633 |
+
peft_model.load_adapter(<path>, adapter_name="adapter_de")
|
| 634 |
+
```
|
| 635 |
+
|
| 636 |
+
Now, we want to generate text on a sample that contains all three languages: The first three samples are in English, the next three are in French, and the last three are in German. We can use the `adapter_names` argument to specify which adapter to use for each sample. Since our base model is used for English, we use the special string `"__base__"` for these samples. For the next three samples, we indicate the adapter name of the French LoRA fine-tune, in this case `"adapter_fr"`. For the last three samples, we indicate the adapter name of the German LoRA fine-tune, in this case `"adapter_de"`. This way, we can use the base model and the two adapters in a single batch.
|
| 637 |
+
|
| 638 |
+
```python
|
| 639 |
+
inputs = tokenizer(
|
| 640 |
+
[
|
| 641 |
+
"Hello, my dog is cute",
|
| 642 |
+
"Hello, my cat is awesome",
|
| 643 |
+
"Hello, my fish is great",
|
| 644 |
+
"Salut, mon chien est mignon",
|
| 645 |
+
"Salut, mon chat est génial",
|
| 646 |
+
"Salut, mon poisson est super",
|
| 647 |
+
"Hallo, mein Hund ist süß",
|
| 648 |
+
"Hallo, meine Katze ist toll",
|
| 649 |
+
"Hallo, mein Fisch ist großartig",
|
| 650 |
+
],
|
| 651 |
+
return_tensors="pt",
|
| 652 |
+
padding=True,
|
| 653 |
+
)
|
| 654 |
+
|
| 655 |
+
adapter_names = [
|
| 656 |
+
"__base__", "__base__", "__base__",
|
| 657 |
+
"adapter_fr", "adapter_fr", "adapter_fr",
|
| 658 |
+
"adapter_de", "adapter_de", "adapter_de",
|
| 659 |
+
]
|
| 660 |
+
output = peft_model.generate(**inputs, adapter_names=adapter_names, max_new_tokens=20)
|
| 661 |
+
```
|
| 662 |
+
|
| 663 |
+
Note that the order does not matter here, i.e. the samples in the batch don't need to be grouped by adapter as in the example above. We just need to ensure that the `adapter_names` argument is aligned correctly with the samples.
|
| 664 |
+
|
| 665 |
+
Additionally, the same approach also works with the `modules_to_save` feature, which allows for saving and reusing specific neural network layers, such as custom heads for classification tasks, across different LoRA adapters.
|
| 666 |
+
|
| 667 |
+
### Caveats
|
| 668 |
+
|
| 669 |
+
Using this feature has some drawbacks, namely:
|
| 670 |
+
|
| 671 |
+
- It only works for inference, not for training.
|
| 672 |
+
- Disabling adapters using the `with model.disable_adapter()` context takes precedence over `adapter_names`.
|
| 673 |
+
- You cannot pass `adapter_names` when some adapter weights were merged with base weight using the `merge_adapter` method. Please unmerge all adapters first by calling `model.unmerge_adapter()`.
|
| 674 |
+
- For obvious reasons, this cannot be used after calling `merge_and_unload()`, since all the LoRA adapters will be merged into the base weights in this case.
|
| 675 |
+
- This feature does not currently work with DoRA, so set `use_dora=False` in your `LoraConfig` if you want to use it.
|
| 676 |
+
- The `modules_to_save` feature is currently only supported for the layers of types `Linear`, `Embedding`, `Conv2d` and `Conv1d`.
|
| 677 |
+
- There is an expected overhead for inference with `adapter_names`, especially if the amount of different adapters in the batch is high. This is because the batch size is effectively reduced to the number of samples per adapter. If runtime performance is your top priority, try the following:
|
| 678 |
+
- Increase the batch size.
|
| 679 |
+
- Try to avoid having a large number of different adapters in the same batch, prefer homogeneous batches. This can be achieved by buffering samples with the same adapter and only performing inference with a small handful of different adapters.
|
| 680 |
+
- Take a look at alternative implementations such as [LoRAX](https://github.com/predibase/lorax), [punica](https://github.com/punica-ai/punica), or [S-LoRA](https://github.com/S-LoRA/S-LoRA), which are specialized to work with a large number of different adapters.
|
| 681 |
+
|
| 682 |
+
## Composing and Reusing LoRA Adapters
|
| 683 |
+
### Arrow
|
| 684 |
+
[Arrow](https://huggingface.co/papers/2405.11157) is a modular routing algorithm designed to combine multiple pre-trained task-specific LoRA adapters to solve a given task. Rather than merging all adapters naively, Arrow introduces a **gradient-free, token-wise mixture-of-experts (MoE) routing mechanism**. At inference time, it first computes a _prototype_ for each LoRA by extracting the top right singular vector from its SVD decomposition. Each token representation is then compared to these prototypes via cosine similarity to obtain routing coefficients. Tokens are assigned to the top-k most relevant LoRA adapters, with the coefficients normalized through softmax, and their outputs linearly combined. This allows effective reuse of existing LoRA modules for new tasks and leads to stronger zero-shot generalization.
|
| 685 |
+
|
| 686 |
+
In PEFT, Arrow is enabled through ```ArrowConfig``` and ```create_arrow_model```. You can also configure parameters such as ```top_k``` (the number of LoRA adapters combined per token), ```router_temperature``` (the softmax temperature applied to the routing coefficients), and ```rng_seed``` (for reproducibility).
|
| 687 |
+
|
| 688 |
+
```py
|
| 689 |
+
from peft import create_arrow_model, ArrowConfig
|
| 690 |
+
from transformers import AutoModelForCausalLM
|
| 691 |
+
|
| 692 |
+
# Loading the model
|
| 693 |
+
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
|
| 694 |
+
|
| 695 |
+
# Creating the Arrow config
|
| 696 |
+
arrow_config = ArrowConfig(
|
| 697 |
+
top_k=3,
|
| 698 |
+
router_temperature=1.0,
|
| 699 |
+
rng_seed=42,
|
| 700 |
+
)
|
| 701 |
+
|
| 702 |
+
# The LoRA adapters below were trained on a clustered FLAN dataset.
|
| 703 |
+
# Task clustering was performed using the Model-Based Clustering (MBC) method,
|
| 704 |
+
# as described in the Arrow paper.
|
| 705 |
+
# While one could train a separate LoRA for each task and let Arrow route tokens among them,
|
| 706 |
+
# training LoRAs on clusters of tasks instead provides an indirect optimization for
|
| 707 |
+
# transfer across the multi-task dataset.
|
| 708 |
+
task_specific_adapter_paths = [
|
| 709 |
+
f"TahaBa/phi3-mini-clustered-flan/ts_expert_{i}" for i in range(10)
|
| 710 |
+
]
|
| 711 |
+
|
| 712 |
+
# Creating the Arrow model
|
| 713 |
+
model = create_arrow_model(
|
| 714 |
+
base_model=base_model,
|
| 715 |
+
task_specific_adapter_paths=task_specific_adapter_paths,
|
| 716 |
+
arrow_config=arrow_config,
|
| 717 |
+
)
|
| 718 |
+
|
| 719 |
+
# Now the forward path could be called on this model, like a normal PeftModel.
|
| 720 |
+
```
|
| 721 |
+
|
| 722 |
+
Furthermore, you can add or remove adapters after calling ```create_arrow_model```—for example, to fine-tune a new adapter or discard an unnecessary one. Once the adapters are in place, you can activate the ```"arrow_router"``` for inference to use Arrow. Note that if you add a new LoRA adapter after ```create_arrow_model``` and want to fine-tune it, you must explicitly set the new adapter as active, since ```"arrow_router"``` is activated by default in ```create_arrow_model```.
|
| 723 |
+
|
| 724 |
+
```py
|
| 725 |
+
from trl import SFTTrainer, SFTConfig
|
| 726 |
+
|
| 727 |
+
# Adding a new adapter and activating it
|
| 728 |
+
model.add_adapter(adapter_name='new_adapter')
|
| 729 |
+
model.set_adapter('new_adapter')
|
| 730 |
+
|
| 731 |
+
# Now the model could be trained along the `new_adapter`.
|
| 732 |
+
trainer = SFTTrainer(
|
| 733 |
+
model=model,
|
| 734 |
+
args=SFTConfig(...),
|
| 735 |
+
...
|
| 736 |
+
)
|
| 737 |
+
|
| 738 |
+
# Once the training is done, you can activate `arrow_router` and use it in inference
|
| 739 |
+
model.set_adapter('arrow_router') # Model is ready to be used at inference time now
|
| 740 |
+
```
|
| 741 |
+
|
| 742 |
+
### GenKnowSub
|
| 743 |
+
[GenKnowSub](https://aclanthology.org/2025.acl-short.54/) augments Arrow by purifying task-specific LoRA adapters before routing. The key idea is to subtract general knowledge encoded in LoRA space—based on the [forgetting-via-negation principle](https://huggingface.co/papers/2212.04089)—so that task adapters become more isolated and focused on task-relevant signals. Concretely, GenKnowSub estimates a low-dimensional “general” subspace from a set of general (non task-specific) LoRA adapters and removes this component from each task adapter’s LoRA update prior to Arrow’s token-wise routing. This typically improves compositionality and reduces interference when combining many task adapters.
|
| 744 |
+
|
| 745 |
+
In PEFT, enable GenKnowSub by setting ```use_gks=True``` in ArrowConfig, and providing ```general_adapter_paths``` in ```create_arrow_model```:
|
| 746 |
+
|
| 747 |
+
```py
|
| 748 |
+
from peft import create_arrow_model, ArrowConfig
|
| 749 |
+
from transformers import AutoModelForCausalLM
|
| 750 |
+
|
| 751 |
+
# Loading the model
|
| 752 |
+
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
|
| 753 |
+
|
| 754 |
+
# Creating the Arrow config
|
| 755 |
+
arrow_config = ArrowConfig(
|
| 756 |
+
top_k=3,
|
| 757 |
+
router_temperature=1.0,
|
| 758 |
+
use_gks=True,
|
| 759 |
+
rng_seed=42,
|
| 760 |
+
)
|
| 761 |
+
|
| 762 |
+
# Paths to the task-specific adapters, trained on the clustered FLAN dataset (as explained above)
|
| 763 |
+
task_specific_adapter_paths = [
|
| 764 |
+
f"TahaBa/phi3-mini-clustered-flan/ts_expert_{i}" for i in range(10)
|
| 765 |
+
]
|
| 766 |
+
# These general adapters are trained on English, German, and French Wikipedia data
# with a causal language modelling objective, each pair like (507-token sentence, 5-token completion), with the loss computed on the completion
|
| 768 |
+
general_adapter_paths = [
|
| 769 |
+
"TahaBa/phi3-mini-general-adapters/cluster0_batch16_prop1.0_langen/checkpoint-17",
|
| 770 |
+
"TahaBa/phi3-mini-general-adapters/cluster0_batch16_prop1.0_langfr/checkpoint-35",
|
| 771 |
+
"TahaBa/phi3-mini-general-adapters/cluster0_batch16_prop1.0_langger/checkpoint-17"
|
| 772 |
+
]
|
| 773 |
+
|
| 774 |
+
# Creating the Arrow model
|
| 775 |
+
model = create_arrow_model(
|
| 776 |
+
base_model=base_model,
|
| 777 |
+
task_specific_adapter_paths=task_specific_adapter_paths,
|
| 778 |
+
general_adapter_paths=general_adapter_paths,
|
| 779 |
+
arrow_config=arrow_config,
|
| 780 |
+
)
|
| 781 |
+
|
| 782 |
+
# Now the forward path could be called on this model, like a normal PeftModel.
|
| 783 |
+
```
|
| 784 |
+
To encode general knowledge, GenKnowSub subtracts the average of the provided general adapters from each task-specific adapter once, before routing begins. Furthermore, the ability to add or remove adapters after calling ```create_arrow_model``` (as described in the Arrow section) is still supported in this case.
|
| 785 |
+
|
| 786 |
+
> [!TIP]
|
| 787 |
+
> **Things to keep in mind when using Arrow + GenKnowSub:**
|
| 788 |
+
>
|
| 789 |
+
> - All LoRA adapters (task-specific and general) must share the same ```rank``` and ```target_modules```.
|
| 790 |
+
>
|
| 791 |
+
> - Any inconsistency in these settings will raise an error in ```create_arrow_model```.
|
| 792 |
+
>
|
| 793 |
+
> - Having different scaling factors (```lora_alpha```) across task adapters is supported — Arrow handles them automatically.
|
| 794 |
+
>
|
| 795 |
+
> - Merging the ```"arrow_router"``` is not supported, due to its dynamic routing behavior.
|
| 796 |
+
>
|
| 797 |
+
> - In create_arrow_model, task adapters are loaded as ```task_i``` and general adapters as ```gks_j``` (where ```i``` and ```j``` are indices). The function ensures consistency of ```target_modules```, ```rank```, and whether adapters are applied to ```Linear``` or ```Linear4bit``` layers. It then adds the ```"arrow_router"``` module and activates it. Any customization of this process requires overriding ```create_arrow_model```.
|
| 798 |
+
>
|
| 799 |
+
> - This implementation is compatible with 4-bit quantization (via bitsandbytes):
|
| 800 |
+
>
|
| 801 |
+
> ```py
|
| 802 |
+
> from transformers import AutoModelForCausalLM, BitsAndBytesConfig
|
| 803 |
+
> import torch
|
| 804 |
+
>
|
| 805 |
+
> # Quantisation config
|
| 806 |
+
> bnb_config = BitsAndBytesConfig(
|
| 807 |
+
> load_in_4bit=True,
|
| 808 |
+
> bnb_4bit_quant_type="nf4",
|
| 809 |
+
> bnb_4bit_compute_dtype=torch.bfloat16,
|
| 810 |
+
> bnb_4bit_use_double_quant=False,
|
| 811 |
+
> )
|
| 812 |
+
>
|
| 813 |
+
> # Loading the model
|
| 814 |
+
> base_model = AutoModelForCausalLM.from_pretrained(
|
| 815 |
+
> "microsoft/Phi-3-mini-4k-instruct",
|
| 816 |
+
> torch_dtype=torch.bfloat16,
|
| 817 |
+
> device_map="auto",
|
| 818 |
+
> quantization_config=bnb_config,
|
| 819 |
+
> )
|
| 820 |
+
>
|
| 821 |
+
> # Now call create_arrow_model() as we explained before.
|
| 822 |
+
> ```
|
peft/docs/source/developer_guides/low_level_api.md
ADDED
|
@@ -0,0 +1,148 @@
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Adapter injection
|
| 18 |
+
|
| 19 |
+
With PEFT, you can inject trainable adapters into any `torch` module which allows you to use adapter methods without relying on the modeling classes in PEFT. This works for all adapters except for those based on prompt learning (e.g. prefix tuning or p-tuning).
|
| 20 |
+
|
| 21 |
+
Check the table below to see when you should inject adapters.
|
| 22 |
+
|
| 23 |
+
| Pros | Cons |
|
| 24 |
+
|---|---|
|
| 25 |
+
| the model is modified in place, keeping all the original attributes and methods | manually write the `from_pretrained` and `save_pretrained` utility functions from Hugging Face to save and load adapters |
|
| 26 |
+
| works for any `torch` module and modality | doesn't work with any of the utility methods provided by `PeftModel` such as disabling and merging adapters |
|
| 27 |
+
|
| 28 |
+
## Creating a new PEFT model
|
| 29 |
+
|
| 30 |
+
To perform the adapter injection, use the [`inject_adapter_in_model`] method. This method takes 3 arguments: the PEFT config, the model, and an optional adapter name. You can also attach multiple adapters to the model if you call [`inject_adapter_in_model`] multiple times with different adapter names.
|
| 31 |
+
|
| 32 |
+
For example, to inject LoRA adapters into the `linear` submodule of the `DummyModel` module:
|
| 33 |
+
|
| 34 |
+
```python
|
| 35 |
+
import torch
|
| 36 |
+
from peft import inject_adapter_in_model, LoraConfig
|
| 37 |
+
|
| 38 |
+
class DummyModel(torch.nn.Module):
|
| 39 |
+
def __init__(self):
|
| 40 |
+
super().__init__()
|
| 41 |
+
self.embedding = torch.nn.Embedding(10, 10)
|
| 42 |
+
self.linear = torch.nn.Linear(10, 10)
|
| 43 |
+
self.lm_head = torch.nn.Linear(10, 10)
|
| 44 |
+
|
| 45 |
+
def forward(self, input_ids):
|
| 46 |
+
x = self.embedding(input_ids)
|
| 47 |
+
x = self.linear(x)
|
| 48 |
+
x = self.lm_head(x)
|
| 49 |
+
return x
|
| 50 |
+
|
| 51 |
+
|
| 52 |
+
lora_config = LoraConfig(
|
| 53 |
+
lora_alpha=16,
|
| 54 |
+
lora_dropout=0.1,
|
| 55 |
+
r=64,
|
| 56 |
+
bias="none",
|
| 57 |
+
target_modules=["linear"],
|
| 58 |
+
)
|
| 59 |
+
|
| 60 |
+
model = DummyModel()
|
| 61 |
+
model = inject_adapter_in_model(lora_config, model)
|
| 62 |
+
|
| 63 |
+
dummy_inputs = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]])
|
| 64 |
+
dummy_outputs = model(dummy_inputs)
|
| 65 |
+
```
|
| 66 |
+
|
| 67 |
+
Print the model to see that the adapters have been correctly injected.
|
| 68 |
+
|
| 69 |
+
```bash
|
| 70 |
+
DummyModel(
|
| 71 |
+
(embedding): Embedding(10, 10)
|
| 72 |
+
(linear): Linear(
|
| 73 |
+
in_features=10, out_features=10, bias=True
|
| 74 |
+
(lora_dropout): ModuleDict(
|
| 75 |
+
(default): Dropout(p=0.1, inplace=False)
|
| 76 |
+
)
|
| 77 |
+
(lora_A): ModuleDict(
|
| 78 |
+
(default): Linear(in_features=10, out_features=64, bias=False)
|
| 79 |
+
)
|
| 80 |
+
(lora_B): ModuleDict(
|
| 81 |
+
(default): Linear(in_features=64, out_features=10, bias=False)
|
| 82 |
+
)
|
| 83 |
+
(lora_embedding_A): ParameterDict()
|
| 84 |
+
(lora_embedding_B): ParameterDict()
|
| 85 |
+
)
|
| 86 |
+
(lm_head): Linear(in_features=10, out_features=10, bias=True)
|
| 87 |
+
)
|
| 88 |
+
```
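As mentioned above, you can attach more than one adapter by calling [`inject_adapter_in_model`] again with a different adapter name. A minimal sketch, where the config values are only examples:

```python
# Inject a second LoRA adapter called "other" into the same model.
other_config = LoraConfig(r=8, lora_alpha=16, target_modules=["linear"])
model = inject_adapter_in_model(other_config, model, adapter_name="other")
```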
|
| 89 |
+
|
| 90 |
+
### Injection based on a `state_dict`
|
| 91 |
+
|
| 92 |
+
Sometimes there is a PEFT adapter checkpoint but the corresponding PEFT config is not known for whatever reason. To inject the PEFT layers for this checkpoint, you would usually have to reverse-engineer the corresponding PEFT config, most notably the `target_modules` argument, based on the `state_dict` from the checkpoint. This can be cumbersome and error-prone. To avoid this, it is also possible to call [`inject_adapter_in_model`] and pass the loaded `state_dict` as an argument:
|
| 93 |
+
|
| 94 |
+
```python
|
| 95 |
+
from safetensors.torch import load_file
|
| 96 |
+
|
| 97 |
+
model = ...
|
| 98 |
+
state_dict = load_file(<path-to-safetensors-file>)
|
| 99 |
+
lora_config = LoraConfig(...)
|
| 100 |
+
model = inject_adapter_in_model(lora_config, model, state_dict=state_dict)
|
| 101 |
+
```
|
| 102 |
+
|
| 103 |
+
In this case, PEFT will use the `state_dict` as the reference for which layers to target instead of using the PEFT config. As a user, you don't have to set the exact `target_modules` of the PEFT config for this to work. However, you should still pass a PEFT config of the right type (in this example, `LoraConfig`); you can leave `target_modules` as `None`.
|
| 104 |
+
|
| 105 |
+
Be aware that this still only creates the uninitialized PEFT layers; the values from the `state_dict` are not used to populate the model weights. To populate the weights, proceed with calling [`set_peft_model_state_dict`] as described below.
|
| 106 |
+
|
| 107 |
+
⚠️ Note that if there is a mismatch between what is configured in the PEFT config and what is found in the `state_dict`, PEFT will warn you about this. You can ignore the warning if you know that the PEFT config is not correctly specified.
|
| 108 |
+
|
| 109 |
+
> [!WARNING]
|
| 110 |
+
> If the original PEFT adapter was using `target_parameters` instead of `target_modules`, injecting from a `state_dict` will not work correctly. In this case, it is mandatory to use the correct PEFT config for injection.
|
| 111 |
+
|
| 112 |
+
## Saving the model
|
| 113 |
+
|
| 114 |
+
To only save the adapter, use the [`get_peft_model_state_dict`] function:
|
| 115 |
+
|
| 116 |
+
```python
|
| 117 |
+
from peft import get_peft_model_state_dict
|
| 118 |
+
|
| 119 |
+
peft_state_dict = get_peft_model_state_dict(model)
|
| 120 |
+
print(peft_state_dict)
|
| 121 |
+
```
|
| 122 |
+
|
| 123 |
+
Otherwise, `model.state_dict()` returns the full state dict of the model.
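If you want to persist just these adapter weights, a minimal sketch using `safetensors` could look like this (the file name is only an example):

```python
from safetensors.torch import save_file

save_file(peft_state_dict, "adapter_model.safetensors")
```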
|
| 124 |
+
|
| 125 |
+
## Loading the model
|
| 126 |
+
|
| 127 |
+
After loading the saved `state_dict`, it can be applied using the [`set_peft_model_state_dict`] function:
|
| 128 |
+
|
| 129 |
+
```python
|
| 130 |
+
from peft import set_peft_model_state_dict
|
| 131 |
+
|
| 132 |
+
model = DummyModel()
|
| 133 |
+
model = inject_adapter_in_model(lora_config, model)
|
| 134 |
+
outcome = set_peft_model_state_dict(model, peft_state_dict)
|
| 135 |
+
# check that there were no wrong keys
|
| 136 |
+
print(outcome.unexpected_keys)
|
| 137 |
+
```
|
| 138 |
+
|
| 139 |
+
If injecting the adapter is slow or you need to load a large number of adapters, you can use an optimization that creates an "empty" adapter on the meta device and only fills in the real weights when [`set_peft_model_state_dict`] is called. To do this, pass `low_cpu_mem_usage=True` to both [`inject_adapter_in_model`] and [`set_peft_model_state_dict`].
|
| 140 |
+
|
| 141 |
+
```python
|
| 142 |
+
model = DummyModel()
|
| 143 |
+
model = inject_adapter_in_model(lora_config, model, low_cpu_mem_usage=True)
|
| 144 |
+
|
| 145 |
+
print(model.linear.lora_A["default"].weight.device.type == "meta") # should be True
|
| 146 |
+
set_peft_model_state_dict(model, peft_state_dict, low_cpu_mem_usage=True)
|
| 147 |
+
print(model.linear.lora_A["default"].weight.device.type == "cpu") # should be True
|
| 148 |
+
```
|
peft/docs/source/developer_guides/mixed_models.md
ADDED
|
@@ -0,0 +1,37 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
-->
|
| 12 |
+
|
| 13 |
+
# Mixed adapter types
|
| 14 |
+
|
| 15 |
+
Normally, it isn't possible to mix different adapter types in 🤗 PEFT. You can create a PEFT model with two different LoRA adapters (which can have different config options), but it is not possible to combine a LoRA and LoHa adapter. With [`PeftMixedModel`] however, this works as long as the adapter types are compatible. The main purpose of allowing mixed adapter types is to combine trained adapters for inference. While it is possible to train a mixed adapter model, this has not been tested and is not recommended.
|
| 16 |
+
|
| 17 |
+
To load different adapter types into a PEFT model, use [`PeftMixedModel`] instead of [`PeftModel`]:
|
| 18 |
+
|
| 19 |
+
```py
|
| 20 |
+
from peft import PeftMixedModel
|
| 21 |
+
|
| 22 |
+
base_model = ... # load the base model, e.g. from transformers
|
| 23 |
+
# load first adapter, which will be called "default"
|
| 24 |
+
peft_model = PeftMixedModel.from_pretrained(base_model, <path_to_adapter1>)
|
| 25 |
+
peft_model.load_adapter(<path_to_adapter2>, adapter_name="other")
|
| 26 |
+
peft_model.set_adapter(["default", "other"])
|
| 27 |
+
```
|
| 28 |
+
|
| 29 |
+
The [`~PeftMixedModel.set_adapter`] method is necessary to activate both adapters, otherwise only the first adapter would be active. You can keep adding more adapters by calling [`~PeftModel.add_adapter`] repeatedly.
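For example, a minimal sketch of loading a third adapter from another checkpoint (the path is a placeholder) and activating all three:

```py
peft_model.load_adapter(<path_to_adapter3>, adapter_name="third")
peft_model.set_adapter(["default", "other", "third"])
```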
|
| 30 |
+
|
| 31 |
+
[`PeftMixedModel`] does not support saving and loading mixed adapters. The adapters should already be trained, and loading the model requires a script to be run each time.
|
| 32 |
+
|
| 33 |
+
## Tips
|
| 34 |
+
|
| 35 |
+
- Not all adapter types can be combined. See [`peft.tuners.mixed.COMPATIBLE_TUNER_TYPES`](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/src/peft/tuners/mixed/model.py#L35) for a list of compatible types. An error will be raised if you try to combine incompatible adapter types.
|
| 36 |
+
- It is possible to mix multiple adapters of the same type which can be useful for combining adapters with very different configs.
|
| 37 |
+
- If you want to combine a lot of different adapters, the most performant way to do it is to consecutively add the same adapter types. For example, add LoRA1, LoRA2, LoHa1, LoHa2 in this order, instead of LoRA1, LoHa1, LoRA2, and LoHa2. While the order can affect the output, there is no inherently *best* order, so it is best to choose the fastest one.
|
peft/docs/source/developer_guides/model_merging.md
ADDED
|
@@ -0,0 +1,164 @@
|
| 1 |
+
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Model merging
|
| 18 |
+
|
| 19 |
+
Training a model for each task can be costly, take up storage space, and the models aren't able to learn new information to improve their performance. Multitask learning can overcome some of these limitations by training a model to learn several tasks, but it is expensive to train and designing a dataset for it is challenging. *Model merging* offers a solution to these challenges by combining multiple pretrained models into one model, giving it the combined abilities of each individual model without any additional training.
|
| 20 |
+
|
| 21 |
+
PEFT provides several methods for merging models like a linear or SVD combination. This guide focuses on two methods that are more efficient for merging LoRA adapters by eliminating redundant parameters:
|
| 22 |
+
|
| 23 |
+
* [TIES](https://hf.co/papers/2306.01708) - TrIm, Elect, and Merge (TIES) is a three-step method for merging models. First, redundant parameters are trimmed, then conflicting signs are resolved into an aggregated vector, and finally the parameters whose signs are the same as the aggregate sign are averaged. This method takes into account that some values (redundant and sign disagreement) can degrade performance in the merged model.
|
| 24 |
+
* [DARE](https://hf.co/papers/2311.03099) - Drop And REscale is a method that can be used to prepare for other model merging methods like TIES. It works by randomly dropping parameters according to a drop rate and rescaling the remaining parameters. This helps to reduce the number of redundant and potentially interfering parameters among multiple models.
|
| 25 |
+
|
| 26 |
+
Models are merged with the [`~LoraModel.add_weighted_adapter`] method, and the specific model merging method is specified in the `combination_type` parameter.
|
| 27 |
+
|
| 28 |
+
## Merge method
|
| 29 |
+
|
| 30 |
+
With TIES and DARE, merging is enabled by setting `combination_type` and `density`, which is the fraction of weights to keep from the individual models. For example, let's merge three finetuned [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) models: [tinyllama_lora_nobots](https://huggingface.co/smangrul/tinyllama_lora_norobots), [tinyllama_lora_sql](https://huggingface.co/smangrul/tinyllama_lora_sql), and [tinyllama_lora_adcopy](https://huggingface.co/smangrul/tinyllama_lora_adcopy).
|
| 31 |
+
|
| 32 |
+
<Tip warning={true}>
|
| 33 |
+
|
| 34 |
+
When you're attempting to merge fully trained models with TIES, you should be aware of any special tokens each model may have added to the embedding layer which are not a part of the original checkpoint's vocabulary. This may cause an issue because each model may have added a special token to the same embedding position. If this is the case, you should use the [`~transformers.PreTrainedModel.resize_token_embeddings`] method to avoid merging the special tokens at the same embedding index.
|
| 35 |
+
|
| 36 |
+
<br>
|
| 37 |
+
|
| 38 |
+
This shouldn't be an issue if you're only merging LoRA adapters trained from the same base model.
|
| 39 |
+
|
| 40 |
+
</Tip>
|
| 41 |
+
|
| 42 |
+
Load a base model and use the [`~PeftModel.load_adapter`] method to load and assign each adapter a name:
|
| 43 |
+
|
| 44 |
+
```py
|
| 45 |
+
from peft import PeftConfig, PeftModel
|
| 46 |
+
from transformers import AutoModelForCausalLM, AutoTokenizer
|
| 47 |
+
import torch
|
| 48 |
+
|
| 49 |
+
config = PeftConfig.from_pretrained("smangrul/tinyllama_lora_norobots")
|
| 50 |
+
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit=True, device_map="auto").eval()
|
| 51 |
+
tokenizer = AutoTokenizer.from_pretrained("smangrul/tinyllama_lora_norobots")
|
| 52 |
+
|
| 53 |
+
model.config.vocab_size = 32005
|
| 54 |
+
model.resize_token_embeddings(32005)
|
| 55 |
+
|
| 56 |
+
model = PeftModel.from_pretrained(model, "smangrul/tinyllama_lora_norobots", adapter_name="norobots")
|
| 57 |
+
_ = model.load_adapter("smangrul/tinyllama_lora_sql", adapter_name="sql")
|
| 58 |
+
_ = model.load_adapter("smangrul/tinyllama_lora_adcopy", adapter_name="adcopy")
|
| 59 |
+
```
|
| 60 |
+
|
| 61 |
+
Set the adapters, weights, `adapter_name`, `combination_type`, and `density` with the [`~LoraModel.add_weighted_adapter`] method.
|
| 62 |
+
|
| 63 |
+
<hfoptions id="merge-method">
|
| 64 |
+
<hfoption id="TIES">
|
| 65 |
+
|
| 66 |
+
Weight values greater than `1.0` typically produce better results because they preserve the correct scale. A good default starting value for the weights is to set all values to `1.0`.
|
| 67 |
+
|
| 68 |
+
```py
|
| 69 |
+
adapters = ["norobots", "adcopy", "sql"]
|
| 70 |
+
weights = [2.0, 1.0, 1.0]
|
| 71 |
+
adapter_name = "merge"
|
| 72 |
+
density = 0.2
|
| 73 |
+
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="ties", density=density)
|
| 74 |
+
```
|
| 75 |
+
|
| 76 |
+
</hfoption>
|
| 77 |
+
<hfoption id="DARE">
|
| 78 |
+
|
| 79 |
+
```py
|
| 80 |
+
adapters = ["norobots", "adcopy", "sql"]
|
| 81 |
+
weights = [2.0, 0.3, 0.7]
|
| 82 |
+
adapter_name = "merge"
|
| 83 |
+
density = 0.2
|
| 84 |
+
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="dare_ties", density=density)
|
| 85 |
+
```
|
| 86 |
+
|
| 87 |
+
</hfoption>
|
| 88 |
+
</hfoptions>
|
| 89 |
+
|
| 90 |
+
Set the newly merged model as the active model with the [`~LoraModel.set_adapter`] method.
|
| 91 |
+
|
| 92 |
+
```py
|
| 93 |
+
model.set_adapter("merge")
|
| 94 |
+
```
|
| 95 |
+
|
| 96 |
+
Now you can use the merged model as an instruction-tuned model to write ad copy or SQL queries!
|
| 97 |
+
|
| 98 |
+
<hfoptions id="ties">
|
| 99 |
+
<hfoption id="instruct">
|
| 100 |
+
|
| 101 |
+
```py
|
| 102 |
+
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
|
| 103 |
+
messages = [
|
| 104 |
+
{"role": "user", "content": "Write an essay about Generative AI."},
|
| 105 |
+
]
|
| 106 |
+
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
|
| 107 |
+
inputs = tokenizer(text, return_tensors="pt")
|
| 108 |
+
inputs = {k: v.to(device) for k, v in inputs.items()}
|
| 109 |
+
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
|
| 110 |
+
print(tokenizer.decode(outputs[0]))
|
| 111 |
+
```
|
| 112 |
+
|
| 113 |
+
</hfoption>
|
| 114 |
+
<hfoption id="ad copy">
|
| 115 |
+
|
| 116 |
+
```py
|
| 117 |
+
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
|
| 118 |
+
messages = [
|
| 119 |
+
{"role": "system", "content": "Create a text ad given the following product and description."},
|
| 120 |
+
{"role": "user", "content": "Product: Sony PS5 PlayStation Console\nDescription: The PS5 console unleashes new gaming possibilities that you never anticipated."},
|
| 121 |
+
]
|
| 122 |
+
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
|
| 123 |
+
inputs = tokenizer(text, return_tensors="pt")
|
| 124 |
+
inputs = {k: v.to(device) for k, v in inputs.items()}
|
| 125 |
+
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
|
| 126 |
+
print(tokenizer.decode(outputs[0]))
|
| 127 |
+
```
|
| 128 |
+
|
| 129 |
+
</hfoption>
|
| 130 |
+
<hfoption id="SQL">
|
| 131 |
+
|
| 132 |
+
```py
|
| 133 |
+
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
|
| 134 |
+
|
| 135 |
+
text = """Table: 2-11365528-2
|
| 136 |
+
Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location']
|
| 137 |
+
Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic?
|
| 138 |
+
SQL Query:"""
|
| 139 |
+
|
| 140 |
+
inputs = tokenizer(text, return_tensors="pt")
|
| 141 |
+
inputs = {k: v.to(device) for k, v in inputs.items()}
|
| 142 |
+
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1, eos_token_id=tokenizer("</s>").input_ids[-1])
|
| 143 |
+
print(tokenizer.decode(outputs[0]))
|
| 144 |
+
```
|
| 145 |
+
|
| 146 |
+
</hfoption>
|
| 147 |
+
</hfoptions>
|
| 148 |
+
|
| 149 |
+
|
| 150 |
+
## Merging (IA)³ Models
|
| 151 |
+
The (IA)³ models facilitate linear merging of adapters. To merge adapters in an (IA)³ model, utilize the `add_weighted_adapter` method from the `IA3Model` class. This method is analogous to the `add_weighted_adapter` method used in `LoraModel`, with the key difference being the absence of the `combination_type` parameter. For example, to merge three (IA)³ adapters into a PEFT model, you would proceed as follows:
|
| 152 |
+
|
| 153 |
+
```py
|
| 154 |
+
adapters = ["adapter1", "adapter2", "adapter3"]
|
| 155 |
+
weights = [0.4, 0.3, 0.3]
|
| 156 |
+
adapter_name = "merge"
|
| 157 |
+
model.add_weighted_adapter(adapters, weights, adapter_name)
|
| 158 |
+
```
|
| 159 |
+
|
| 160 |
+
It is recommended that the weights sum to 1.0 to preserve the scale of the model. The merged model can then be set as the active model using the `set_adapter` method:
|
| 161 |
+
|
| 162 |
+
```py
|
| 163 |
+
model.set_adapter("merge")
|
| 164 |
+
```
|
peft/docs/source/developer_guides/quantization.md
ADDED
|
@@ -0,0 +1,294 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Quantization
|
| 18 |
+
|
| 19 |
+
Quantization represents data with fewer bits, making it a useful technique for reducing memory usage and accelerating inference, especially when it comes to large language models (LLMs). There are several ways to quantize a model, including:
|
| 20 |
+
|
| 21 |
+
* optimizing which model weights are quantized with the [AWQ](https://hf.co/papers/2306.00978) algorithm
|
| 22 |
+
* independently quantizing each row of a weight matrix with the [GPTQ](https://hf.co/papers/2210.17323) algorithm
|
| 23 |
+
* quantizing to 8-bit and 4-bit precision with the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library
|
| 24 |
+
* quantizing to as low as 2-bit precision with the [AQLM](https://huggingface.co/papers/2401.06118) algorithm
|
| 25 |
+
|
| 26 |
+
However, after a model is quantized it isn't typically further trained for downstream tasks because training can be unstable due to the lower precision of the weights and activations. But since PEFT methods only add *extra* trainable parameters, this allows you to train a quantized model with a PEFT adapter on top! Combining quantization with PEFT can be a good strategy for training even the largest models on a single GPU. For example, [QLoRA](https://hf.co/papers/2305.14314) is a method that quantizes a model to 4-bits and then trains it with LoRA. This method allows you to finetune a 65B parameter model on a single 48GB GPU!
|
| 27 |
+
|
| 28 |
+
In this guide, you'll see how to quantize a model to 4-bits and train it with LoRA.
|
| 29 |
+
|
| 30 |
+
## Quantize a model
|
| 31 |
+
|
| 32 |
+
[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is a quantization library with a Transformers integration. With this integration, you can quantize a model to 8 or 4-bits and enable many other options by configuring the [`~transformers.BitsAndBytesConfig`] class. For example, you can:
|
| 33 |
+
|
| 34 |
+
* set `load_in_4bit=True` to quantize the model to 4-bits when you load it
|
| 35 |
+
* set `bnb_4bit_quant_type="nf4"` to use a special 4-bit data type for weights initialized from a normal distribution
|
| 36 |
+
* set `bnb_4bit_use_double_quant=True` to use a nested quantization scheme to quantize the already quantized weights
|
| 37 |
+
* set `bnb_4bit_compute_dtype=torch.bfloat16` to use bfloat16 for faster computation
|
| 38 |
+
|
| 39 |
+
```py
|
| 40 |
+
import torch
|
| 41 |
+
from transformers import BitsAndBytesConfig
|
| 42 |
+
|
| 43 |
+
config = BitsAndBytesConfig(
|
| 44 |
+
load_in_4bit=True,
|
| 45 |
+
bnb_4bit_quant_type="nf4",
|
| 46 |
+
bnb_4bit_use_double_quant=True,
|
| 47 |
+
bnb_4bit_compute_dtype=torch.bfloat16,
|
| 48 |
+
)
|
| 49 |
+
```
|
| 50 |
+
|
| 51 |
+
Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.
|
| 52 |
+
|
| 53 |
+
```py
|
| 54 |
+
from transformers import AutoModelForCausalLM
|
| 55 |
+
|
| 56 |
+
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
|
| 57 |
+
```
|
| 58 |
+
|
| 59 |
+
Next, you should call the [`~peft.utils.prepare_model_for_kbit_training`] function to preprocess the quantized model for training.
|
| 60 |
+
|
| 61 |
+
```py
|
| 62 |
+
from peft import prepare_model_for_kbit_training
|
| 63 |
+
|
| 64 |
+
model = prepare_model_for_kbit_training(model)
|
| 65 |
+
```
|
| 66 |
+
|
| 67 |
+
Now that the quantized model is ready, let's set up a configuration.
|
| 68 |
+
|
| 69 |
+
## LoraConfig
|
| 70 |
+
|
| 71 |
+
Create a [`LoraConfig`] with the following parameters (or choose your own):
|
| 72 |
+
|
| 73 |
+
```py
|
| 74 |
+
from peft import LoraConfig
|
| 75 |
+
|
| 76 |
+
config = LoraConfig(
|
| 77 |
+
r=16,
|
| 78 |
+
lora_alpha=8,
|
| 79 |
+
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
|
| 80 |
+
lora_dropout=0.05,
|
| 81 |
+
bias="none",
|
| 82 |
+
task_type="CAUSAL_LM"
|
| 83 |
+
)
|
| 84 |
+
```
|
| 85 |
+
|
| 86 |
+
Then use the [`get_peft_model`] function to create a [`PeftModel`] from the quantized model and configuration.
|
| 87 |
+
|
| 88 |
+
```py
|
| 89 |
+
from peft import get_peft_model
|
| 90 |
+
|
| 91 |
+
model = get_peft_model(model, config)
|
| 92 |
+
```
|
| 93 |
+
|
| 94 |
+
You're all set for training with whichever training method you prefer!
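As a minimal sketch of what such a training run could look like with the 🤗 Transformers [`~transformers.Trainer`], assuming you have already prepared a `tokenizer` and a tokenized `train_dataset` (both are placeholders here, as are the hyperparameters and output paths):

```py
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-qlora",            # example output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
)
trainer = Trainer(
    model=model,                           # the PEFT model created above
    args=training_args,
    train_dataset=train_dataset,           # placeholder: your tokenized dataset
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("mistral-qlora-adapter")  # saves only the adapter weights
```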
|
| 95 |
+
|
| 96 |
+
### LoftQ initialization
|
| 97 |
+
|
| 98 |
+
[LoftQ](https://hf.co/papers/2310.08659) initializes LoRA weights such that the quantization error is minimized, and it can improve performance when training quantized models. To get started, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).
|
| 99 |
+
|
| 100 |
+
In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as the quant type in your quantization config when using 4-bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
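As a rough sketch (not the full workflow from the linked instructions), the configuration could look like the following, assuming `base_model` is a model that has *not* been quantized yet, since LoftQ performs the quantization-aware initialization itself:

```py
from peft import LoftQConfig, LoraConfig, get_peft_model

loftq_config = LoftQConfig(loftq_bits=4)   # minimize the 4-bit quantization error
lora_config = LoraConfig(
    init_lora_weights="loftq",
    loftq_config=loftq_config,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)  # base_model: unquantized model
```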
|
| 101 |
+
|
| 102 |
+
### QLoRA-style training
|
| 103 |
+
|
| 104 |
+
QLoRA adds trainable weights to all the linear layers in the transformer architecture. Since the attribute names for these linear layers can vary across architectures, set `target_modules` to `"all-linear"` to add LoRA to all the linear layers:
|
| 105 |
+
|
| 106 |
+
```py
|
| 107 |
+
config = LoraConfig(target_modules="all-linear", ...)
|
| 108 |
+
```
|
| 109 |
+
|
| 110 |
+
## GPTQ quantization
|
| 111 |
+
|
| 112 |
+
You can learn more about GPTQ-based 2-, 3-, 4-, and 8-bit quantization at [GPTQModel](https://github.com/ModelCloud/GPTQModel) and in the Transformers [GPTQ](https://huggingface.co/docs/transformers/quantization/gptq) doc. For training after quantization, PEFT can use either the [GPTQModel](https://github.com/ModelCloud/GPTQModel) or the [AutoGPTQ](https://github.com/autogptq/autogptq) library, but we recommend GPTQModel because AutoGPTQ will be deprecated in a future release.
|
| 113 |
+
|
| 114 |
+
```bash
|
| 115 |
+
# gptqmodel install
|
| 116 |
+
pip install gptqmodel --no-build-isolation
|
| 117 |
+
```
|
| 118 |
+
|
| 119 |
+
```py
|
| 120 |
+
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
|
| 121 |
+
|
| 122 |
+
model_id = "facebook/opt-125m"
|
| 123 |
+
tokenizer = AutoTokenizer.from_pretrained(model_id)
|
| 124 |
+
|
| 125 |
+
gptq_config = GPTQConfig(bits=4, group_size=128, dataset="wikitext2", tokenizer=tokenizer)
|
| 126 |
+
|
| 127 |
+
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
|
| 128 |
+
|
| 129 |
+
# save quantized model
|
| 130 |
+
quantized_model.save_pretrained("./opt-125m-gptq")
|
| 131 |
+
tokenizer.save_pretrained("./opt-125m-gptq")
|
| 132 |
+
```
|
| 133 |
+
|
| 134 |
+
Once quantized, you can post-train GPTQ models with PEFT APIs.
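For example, a minimal sketch of adding a LoRA adapter on top of the quantized model saved above (the LoRA hyperparameters and target modules are only examples):

```py
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("./opt-125m-gptq", device_map="auto")
peft_config = LoraConfig(r=16, lora_alpha=8, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```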
|
| 135 |
+
|
| 136 |
+
## AQLM quantization
|
| 137 |
+
|
| 138 |
+
Additive Quantization of Language Models ([AQLM](https://huggingface.co/papers/2401.06118)) is a Large Language Models compression method. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes. This allows it to compress models down to as low as 2-bit with considerably low accuracy losses.
|
| 139 |
+
|
| 140 |
+
Since the AQLM quantization process is computationally expensive, the use of prequantized models is recommended. A partial list of available models can be found in the official aqlm [repository](https://github.com/Vahe1994/AQLM).
|
| 141 |
+
|
| 142 |
+
The models support LoRA adapter tuning. To tune the quantized model you'll need to install the `aqlm` inference library: `pip install aqlm>=1.0.2`. Finetuned LoRA adapters must be saved separately, as merging them with the AQLM-quantized weights is not possible.
|
| 143 |
+
|
| 144 |
+
```py
|
| 145 |
+
quantized_model = AutoModelForCausalLM.from_pretrained(
|
| 146 |
+
"BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf-test-dispatch",
|
| 147 |
+
torch_dtype="auto", device_map="auto", low_cpu_mem_usage=True,
|
| 148 |
+
)
|
| 149 |
+
|
| 150 |
+
peft_config = LoraConfig(...)
|
| 151 |
+
|
| 152 |
+
quantized_model = get_peft_model(quantized_model, peft_config)
|
| 153 |
+
```
|
| 154 |
+
|
| 155 |
+
You can refer to the [Google Colab](https://colab.research.google.com/drive/12GTp1FCj5_0SnnNQH18h_2XFh9vS_guX?usp=sharing) example for an overview of AQLM+LoRA finetuning.
|
| 156 |
+
|
| 157 |
+
## EETQ quantization
|
| 158 |
+
|
| 159 |
+
You can also perform LoRA fine-tuning on EETQ-quantized models. The [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a Transformers version that is compatible with EETQ (e.g. by installing it from the latest PyPI release or from source).
|
| 160 |
+
|
| 161 |
+
```py
|
| 162 |
+
import torch
|
| 163 |
+
from transformers import EetqConfig
|
| 164 |
+
|
| 165 |
+
config = EetqConfig("int8")
|
| 166 |
+
```
|
| 167 |
+
|
| 168 |
+
Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.
|
| 169 |
+
|
| 170 |
+
```py
|
| 171 |
+
from transformers import AutoModelForCausalLM
|
| 172 |
+
|
| 173 |
+
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
|
| 174 |
+
```
|
| 175 |
+
|
| 176 |
+
and create a `LoraConfig` and pass it to `get_peft_model`:
|
| 177 |
+
|
| 178 |
+
```py
|
| 179 |
+
from peft import LoraConfig, get_peft_model
|
| 180 |
+
|
| 181 |
+
config = LoraConfig(
|
| 182 |
+
r=16,
|
| 183 |
+
lora_alpha=8,
|
| 184 |
+
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
|
| 185 |
+
lora_dropout=0.05,
|
| 186 |
+
bias="none",
|
| 187 |
+
task_type="CAUSAL_LM"
|
| 188 |
+
)
|
| 189 |
+
|
| 190 |
+
model = get_peft_model(model, config)
|
| 191 |
+
```
|
| 192 |
+
|
| 193 |
+
## HQQ quantization
|
| 194 |
+
|
| 195 |
+
The models that are quantized using Half-Quadratic Quantization of Large Machine Learning Models ([HQQ](https://mobiusml.github.io/hqq_blog/)) support LoRA adapter tuning. To tune the quantized model, you'll need to install the `hqq` library with: `pip install hqq`.
|
| 196 |
+
|
| 197 |
+
```python
|
| 198 |
+
from hqq.engine.hf import HQQModelForCausalLM
|
| 199 |
+
|
| 200 |
+
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
|
| 201 |
+
|
| 202 |
+
quantized_model = HQQModelForCausalLM.from_quantized(save_dir_or_hfhub, device=device)
|
| 203 |
+
peft_config = LoraConfig(...)
|
| 204 |
+
quantized_model = get_peft_model(quantized_model, peft_config)
|
| 205 |
+
```
|
| 206 |
+
|
| 207 |
+
Alternatively, use a Transformers version that is compatible with HQQ (e.g. by installing it from the latest PyPI release or from source).
|
| 208 |
+
|
| 209 |
+
```python
|
| 210 |
+
from transformers import HqqConfig, AutoModelForCausalLM
|
| 211 |
+
|
| 212 |
+
quant_config = HqqConfig(nbits=4, group_size=64)
|
| 213 |
+
quantized_model = AutoModelForCausalLM.from_pretrained(save_dir_or_hfhub, device_map=device_map, quantization_config=quant_config)
|
| 214 |
+
peft_config = LoraConfig(...)
|
| 215 |
+
quantized_model = get_peft_model(quantized_model, peft_config)
|
| 216 |
+
```
|
| 217 |
+
|
| 218 |
+
## torchao (PyTorch Architecture Optimization)
|
| 219 |
+
|
| 220 |
+
PEFT supports models quantized with [torchao](https://github.com/pytorch/ao) ("ao") for int8 quantization.
|
| 221 |
+
|
| 222 |
+
```python
|
| 223 |
+
from peft import LoraConfig, get_peft_model
|
| 224 |
+
from transformers import AutoModelForCausalLM, TorchAoConfig
|
| 225 |
+
|
| 226 |
+
model_id = ...
|
| 227 |
+
quantization_config = TorchAoConfig(quant_type="int8_weight_only")
|
| 228 |
+
base_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
|
| 229 |
+
peft_config = LoraConfig(...)
|
| 230 |
+
model = get_peft_model(base_model, peft_config)
|
| 231 |
+
```
|
| 232 |
+
|
| 233 |
+
### Caveats:
|
| 234 |
+
|
| 235 |
+
- Use the most recent versions of torchao (>= v0.4.0) and transformers (> 4.42).
|
| 236 |
+
- Only linear layers are currently supported.
|
| 237 |
+
- `quant_type = "int4_weight_only"` is currently not supported.
|
| 238 |
+
- `NF4` is not implemented in transformers as of yet and is thus also not supported.
|
| 239 |
+
- DoRA only works with `quant_type = "int8_weight_only"` at the moment.
|
| 240 |
+
- There is explicit support for torchao when used with LoRA. However, when torchao quantizes a layer, its class does not change, only the type of the underlying tensor. For this reason, PEFT methods other than LoRA will generally also work with torchao, even if not explicitly supported. Be aware, however, that **merging only works correctly with LoRA and with `quant_type = "int8_weight_only"`**. If you use a different PEFT method or dtype, merging will likely result in an error, and even if it doesn't, the results will still be incorrect.
|
| 241 |
+
|
| 242 |
+
## INC quantization
|
| 243 |
+
|
| 244 |
+
Intel Neural Compressor ([INC](https://github.com/intel/neural-compressor)) enables model quantization for various devices,
|
| 245 |
+
including Intel Gaudi accelerators (also known as HPU devices). You can perform LoRA fine-tuning on models that have been
|
| 246 |
+
quantized using INC. To use INC with PyTorch models, install the library with: `pip install neural-compressor[pt]`.
|
| 247 |
+
Quantizing a model to FP8 precision for HPU devices can be done with the following single-step quantization workflow:
|
| 248 |
+
|
| 249 |
+
```python
|
| 250 |
+
import torch
|
| 251 |
+
from neural_compressor.torch.quantization import FP8Config, convert, finalize_calibration, prepare
|
| 252 |
+
quant_configs = {
|
| 253 |
+
...
|
| 254 |
+
}
|
| 255 |
+
config = FP8Config(**quant_configs)
|
| 256 |
+
```
|
| 257 |
+
|
| 258 |
+
Pass the config to the `prepare` method, run inference to gather calibration stats, and call `finalize_calibration`
|
| 259 |
+
and `convert` methods to quantize model to FP8 precision:
|
| 260 |
+
|
| 261 |
+
```python
|
| 262 |
+
model = prepare(model, config)
|
| 263 |
+
# Run inference to collect calibration statistics
|
| 264 |
+
...
|
| 265 |
+
# Finalize calibration and convert the model to FP8 precision
|
| 266 |
+
finalize_calibration(model)
|
| 267 |
+
model = convert(model)
|
| 268 |
+
# Load PEFT LoRA adapter as usual
|
| 269 |
+
...
|
| 270 |
+
```
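The last step, loading the adapter, is the same as for any other PEFT checkpoint. A minimal sketch, where the adapter path is a placeholder for a LoRA checkpoint trained for the same base model:

```python
from peft import PeftModel

model = PeftModel.from_pretrained(model, "path/to/lora-adapter")
```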
|
| 271 |
+
|
| 272 |
+
An example demonstrating how to load a PEFT LoRA adapter into an INC-quantized FLUX text-to-image model for HPU
|
| 273 |
+
devices is provided [here](https://github.com/huggingface/peft/blob/main/examples/stable_diffusion/inc_flux_lora_hpu.py).
|
| 274 |
+
|
| 275 |
+
|
| 276 |
+
### Caveats:
|
| 277 |
+
|
| 278 |
+
- `merge()` and `unmerge()` methods are currently not supported for INC-quantized models.
|
| 279 |
+
- Currently, only **Linear** INC-quantized layers are supported when loading PEFT adapters.
|
| 280 |
+
|
| 281 |
+
## Other Supported PEFT Methods
|
| 282 |
+
|
| 283 |
+
Besides LoRA, the following PEFT methods also support quantization:
|
| 284 |
+
|
| 285 |
+
- **VeRA** (supports bitsandbytes quantization)
|
| 286 |
+
- **AdaLoRA** (supports both bitsandbytes and GPTQ quantization)
|
| 287 |
+
- **(IA)³** (supports bitsandbytes quantization)
|
| 288 |
+
|
| 289 |
+
## Next steps
|
| 290 |
+
|
| 291 |
+
If you're interested in learning more about quantization, the following may be helpful:
|
| 292 |
+
|
| 293 |
+
* Learn more details about QLoRA and check out some benchmarks on its impact in the [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes) blog post.
|
| 294 |
+
* Read more about different quantization schemes in the Transformers [Quantization](https://hf.co/docs/transformers/main/quantization) guide.
|
peft/docs/source/developer_guides/torch_compile.md
ADDED
|
@@ -0,0 +1,71 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# torch.compile
|
| 18 |
+
|
| 19 |
+
In PEFT, [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) works for some but not all features. The reason it won't always work is that PEFT is highly dynamic in certain places (loading and switching between multiple adapters, for instance), which can cause trouble for `torch.compile`. In other places, `torch.compile` may work, but won't be as fast as expected because of graph breaks.
|
| 20 |
+
|
| 21 |
+
If you don't see an error, it doesn't necessarily mean that `torch.compile` worked correctly. It might give you an output, but the output could be incorrect. This guide describes what works with `torch.compile` and what doesn't. For your own testing, we recommend using the latest PyTorch version, as `torch.compile` is constantly being improved.
|
| 22 |
+
|
| 23 |
+
> [!TIP]
|
| 24 |
+
> Unless indicated otherwise, the default `torch.compile` settings were used.
|
| 25 |
+
|
| 26 |
+
## Training and inference with `torch.compile`
|
| 27 |
+
|
| 28 |
+
These features **work** with `torch.compile`. Everything listed below was tested with a causal LM:
|
| 29 |
+
|
| 30 |
+
- Training with `Trainer` from 🤗 transformers
|
| 31 |
+
- Training with a custom PyTorch loop
|
| 32 |
+
- Inference
|
| 33 |
+
- Generation
|
| 34 |
+
|
| 35 |
+
The following adapters were tested successfully:
|
| 36 |
+
|
| 37 |
+
- AdaLoRA
|
| 38 |
+
- BOFT
|
| 39 |
+
- Bone
|
| 40 |
+
- IA³
|
| 41 |
+
- Layer Norm Tuning
|
| 42 |
+
- LoHa
|
| 43 |
+
- LoKr
|
| 44 |
+
- LoRA
|
| 45 |
+
- LoRA + DoRA
|
| 46 |
+
- LoRA applied to embedding layers
|
| 47 |
+
- OFT
|
| 48 |
+
- VeRA
|
| 49 |
+
- HRA
|
| 50 |
+
|
| 51 |
+
## Advanced PEFT features with `torch.compile`
|
| 52 |
+
|
| 53 |
+
Below are some of the more advanced PEFT features that **work**. They were all tested with LoRA.
|
| 54 |
+
|
| 55 |
+
- `modules_to_save` (i.e. `config = LoraConfig(..., modules_to_save=...)`)
|
| 56 |
+
- Merging adapters (one or multiple)
|
| 57 |
+
- Merging multiple adapters into one adapter (i.e. calling `model.add_weighted_adapter(...)`)
|
| 58 |
+
- Using PEFT adapters with quantization (bitsandbytes)
|
| 59 |
+
- Disabling adapters (i.e. using `with model.disable_adapter()`)
|
| 60 |
+
- Unloading (i.e. calling `model.merge_and_unload()`)
|
| 61 |
+
- Mixed adapter batches (i.e. calling `model(batch, adapter_names=["__base__", "default", "other", ...])`)
|
| 62 |
+
- Inference with multiple adapters (i.e. using `model.add_adapter` or `model.load_adapter` to load more than 1 adapter); for this, only call `torch.compile` _after_ loading all adapters
|
| 63 |
+
|
| 64 |
+
Generally, we can expect that if a feature works correctly with LoRA and is also supported by other adapter types, it should also work for that adapter type.
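For example, a minimal sketch of the multi-adapter inference case from the list above; the model id and adapter paths are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base_model, "path/to/adapter1", adapter_name="adapter1")
model.load_adapter("path/to/adapter2", adapter_name="adapter2")
# only compile after all adapters have been loaded
model = torch.compile(model)
```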
|
| 65 |
+
|
| 66 |
+
## Test cases
|
| 67 |
+
|
| 68 |
+
All the use cases listed above are tested inside of [`peft/tests/test_torch_compile.py`](https://github.com/huggingface/peft/blob/main/tests/test_torch_compile.py). If you want to check in more detail how we tested a certain feature, please go to that file and check the test that corresponds to your use case.
|
| 69 |
+
|
| 70 |
+
> [!TIP]
|
| 71 |
+
> If you have another use case where you know that `torch.compile` does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.
|
peft/docs/source/developer_guides/troubleshooting.md
ADDED
|
@@ -0,0 +1,458 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# Troubleshooting
|
| 18 |
+
|
| 19 |
+
If you encounter any issue when using PEFT, please check the following list of common issues and their solutions.
|
| 20 |
+
|
| 21 |
+
## Examples don't work
|
| 22 |
+
|
| 23 |
+
Examples often rely on the most recent package versions, so please ensure they're up-to-date. In particular, check the following package versions:
|
| 24 |
+
|
| 25 |
+
- `peft`
|
| 26 |
+
- `transformers`
|
| 27 |
+
- `accelerate`
|
| 28 |
+
- `torch`
|
| 29 |
+
|
| 30 |
+
In general, you can update the package version by running this command inside your Python environment:
|
| 31 |
+
|
| 32 |
+
```bash
|
| 33 |
+
python -m pip install -U <package_name>
|
| 34 |
+
```
|
| 35 |
+
|
| 36 |
+
Installing PEFT from source is useful for keeping up with the latest developments:
|
| 37 |
+
|
| 38 |
+
```bash
|
| 39 |
+
python -m pip install git+https://github.com/huggingface/peft
|
| 40 |
+
```
|
| 41 |
+
|
| 42 |
+
## Dtype-related issues
|
| 43 |
+
|
| 44 |
+
### ValueError: Attempting to unscale FP16 gradients
|
| 45 |
+
|
| 46 |
+
This error probably occurred because the model was loaded with `torch_dtype=torch.float16` and then used in an automatic mixed precision (AMP) context, e.g. by setting `fp16=True` in the [`~transformers.Trainer`] class from 🤗 Transformers. The reason is that when using AMP, trainable weights should never use fp16. To make this work without loading the whole model in fp32, add the following to your code:
|
| 47 |
+
|
| 48 |
+
```python
|
| 49 |
+
peft_model = get_peft_model(...)
|
| 50 |
+
|
| 51 |
+
# add this:
|
| 52 |
+
for param in peft_model.parameters():
|
| 53 |
+
if param.requires_grad:
|
| 54 |
+
param.data = param.data.float()
|
| 55 |
+
|
| 56 |
+
# proceed as usual
|
| 57 |
+
trainer = Trainer(model=peft_model, fp16=True, ...)
|
| 58 |
+
trainer.train()
|
| 59 |
+
```
|
| 60 |
+
|
| 61 |
+
Alternatively, you can use the [`~utils.cast_mixed_precision_params`] function to correctly cast the weights:
|
| 62 |
+
|
| 63 |
+
```python
|
| 64 |
+
from peft import cast_mixed_precision_params
|
| 65 |
+
|
| 66 |
+
peft_model = get_peft_model(...)
|
| 67 |
+
cast_mixed_precision_params(peft_model, dtype=torch.float16)
|
| 68 |
+
|
| 69 |
+
# proceed as usual
|
| 70 |
+
trainer = Trainer(model=peft_model, fp16=True, ...)
|
| 71 |
+
trainer.train()
|
| 72 |
+
```
|
| 73 |
+
|
| 74 |
+
> [!TIP]
|
| 75 |
+
> Starting from PEFT version v0.12.0, PEFT automatically promotes the dtype of adapter weights from `torch.float16` and `torch.bfloat16` to `torch.float32` where appropriate. To _prevent_ this behavior, you can pass `autocast_adapter_dtype=False` to [`~get_peft_model`], to [`~PeftModel.from_pretrained`], and to [`~PeftModel.load_adapter`].
|
| 76 |
+
|
| 77 |
+
### Selecting the dtype of the adapter
|
| 78 |
+
|
| 79 |
+
Most PEFT methods, like LoRA, work by adding trainable adapter weights. By default, those weights are stored in float32 dtype (fp32), i.e. at a relatively high precision. Therefore, even if the base model is loaded in float16 (fp16) or bfloat16 (bf16), the adapter weights are float32. When the adapter results are calculated during the forward pass, the input will typically be in the dtype of the base model, thus it will be upcast to float32 if necessary, then cast back to the original dtype.
|
| 80 |
+
|
| 81 |
+
If you prefer to have the adapter weights in the lower precision of the base model, i.e. in float16 or bfloat16, you can pass `autocast_adapter_dtype=False` when creating the model ([`~get_peft_model`]) or loading the model ([`~PeftModel.from_pretrained`]). There are some advantages and disadvantages to this:
|
| 82 |
+
|
| 83 |
+
Advantages of half precision adapter:
|
| 84 |
+
- computation slightly faster
|
| 85 |
+
- slightly less memory
|
| 86 |
+
- smaller file size of checkpoint (half the size)
|
| 87 |
+
|
| 88 |
+
Disadvantages of half precision adapter:
|
| 89 |
+
- slightly worse loss
|
| 90 |
+
- higher risk of overflow or underflow
|
| 91 |
+
|
| 92 |
+
Note that for most use cases, overall runtime and memory cost will be determined by the size of the base model and by the dataset, while the dtype of the PEFT adapter will only have a small impact.
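A minimal sketch of opting into a half precision adapter; the model id is only an example:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", torch_dtype=torch.bfloat16)
peft_model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"), autocast_adapter_dtype=False)

# the LoRA weights are kept in bfloat16 instead of being upcast to float32
for name, param in peft_model.named_parameters():
    if "lora_A" in name:
        print(name, param.dtype)  # torch.bfloat16
        break
```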
|
| 93 |
+
|
| 94 |
+
## Bad results from a loaded PEFT model
|
| 95 |
+
|
| 96 |
+
There can be several reasons for getting a poor result from a loaded PEFT model which are listed below. If you're still unable to troubleshoot the problem, see if anyone else had a similar [issue](https://github.com/huggingface/peft/issues) on GitHub, and if you can't find any, open a new issue.
|
| 97 |
+
|
| 98 |
+
When opening an issue, it helps a lot if you provide a minimal code example that reproduces the issue. Also, please report if the loaded model performs at the same level as the model did before fine-tuning, if it performs at a random level, or if it is only slightly worse than expected. This information helps us identify the problem more quickly.
|
| 99 |
+
|
| 100 |
+
### Random deviations
|
| 101 |
+
|
| 102 |
+
If your model outputs are not exactly the same as previous runs, there could be an issue with random elements. For example:
|
| 103 |
+
|
| 104 |
+
1. please ensure it is in `.eval()` mode, which is important, for instance, if the model uses dropout
|
| 105 |
+
2. if you use [`~transformers.GenerationMixin.generate`] on a language model, there could be random sampling, so obtaining the same result requires setting a random seed
|
| 106 |
+
3. if you used quantization and merged the weights, small deviations are expected due to rounding errors
|
| 107 |
+
|
| 108 |
+
### Incorrectly loaded model
|
| 109 |
+
|
| 110 |
+
Please ensure that you load the model correctly. A common error is trying to load a _trained_ model with [`get_peft_model`] which is incorrect. Instead, the loading code should look like this:
|
| 111 |
+
|
| 112 |
+
```python
|
| 113 |
+
from peft import PeftModel, PeftConfig
|
| 114 |
+
|
| 115 |
+
base_model = ... # to load the base model, use the same code as when you trained it
|
| 116 |
+
config = PeftConfig.from_pretrained(peft_model_id)
|
| 117 |
+
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)
|
| 118 |
+
```
|
| 119 |
+
|
| 120 |
+
### Randomly initialized layers
|
| 121 |
+
|
| 122 |
+
For some tasks, it is important to correctly configure `modules_to_save` in the config to account for randomly initialized layers.
|
| 123 |
+
|
| 124 |
+
As an example, this is necessary if you use LoRA to fine-tune a language model for sequence classification because 🤗 Transformers adds a randomly initialized classification head on top of the model. If you do not add this layer to `modules_to_save`, the classification head won't be saved. The next time you load the model, you'll get a _different_ randomly initialized classification head, resulting in completely different results.
|
| 125 |
+
|
| 126 |
+
PEFT tries to correctly guess the `modules_to_save` if you provide the `task_type` argument in the config. This should work for transformers models that follow the standard naming scheme. It is always a good idea to double check though because we can't guarantee all models follow the naming scheme.
|
| 127 |
+
|
| 128 |
+
When you load a transformers model that has randomly initialized layers, you should see a warning along the lines of:
|
| 129 |
+
|
| 130 |
+
```
|
| 131 |
+
Some weights of <MODEL> were not initialized from the model checkpoint at <ID> and are newly initialized: [<LAYER_NAMES>].
|
| 132 |
+
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
|
| 133 |
+
```
|
| 134 |
+
|
| 135 |
+
The mentioned layers should be added to `modules_to_save` in the config to avoid the described problem.
|
| 136 |
+
|
| 137 |
+
> [!TIP]
|
| 138 |
+
> As an example, when loading a model that is using the DeBERTa architecture for sequence classification, you'll see a warning that the following weights are newly initialized: `['classifier.bias', 'classifier.weight', 'pooler.dense.bias', 'pooler.dense.weight']`. From this, it follows that the `classifier` and `pooler` layers should be added to: `modules_to_save=["classifier", "pooler"]`.
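For the DeBERTa example above, a minimal sketch of the corresponding config could look like this (the `target_modules` names assume a DeBERTa-v3 style architecture and are only an example):

```python
from peft import LoraConfig

config = LoraConfig(
    task_type="SEQ_CLS",
    target_modules=["query_proj", "value_proj"],   # assumption: DeBERTa-v3 attention naming
    modules_to_save=["classifier", "pooler"],      # save the randomly initialized heads
)
```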
|
| 139 |
+
|
| 140 |
+
### Extending the vocabulary
|
| 141 |
+
|
| 142 |
+
For many language fine-tuning tasks, extending the model's vocabulary is necessary since new tokens are being introduced. This requires extending the embedding layer to account for the new tokens and, depending on the fine-tuning method, also storing the embedding layer in addition to the adapter weights when saving the adapter. There are a few ways of achieving this, ordered by parameter efficiency:
|
| 143 |
+
|
| 144 |
+
- [trainable tokens](../package_reference/trainable_tokens), train only the specified tokens, optionally store only the updated values
|
| 145 |
+
- training an adapter on the embedding matrix, optionally store only the updated values
|
| 146 |
+
- full-finetuning of the embedding layer
|
| 147 |
+
|
| 148 |
+
#### Using trainable tokens
|
| 149 |
+
|
| 150 |
+
Let's start with trainable tokens, in this case its [LoRA integration](../developer_guides/lora#efficiently-train-tokens-alongside-lora). If you're interested in only training the new embeddings and nothing else, refer to the [standalone documentation](../package_reference/trainable_tokens).
|
| 151 |
+
|
| 152 |
+
To enable selective token training of the embedding layer, you'll need to supply the token ids of your newly added tokens via the `trainable_token_indices` parameter. Optionally you can specify which layer to target if there is more than one embedding layer. For a Mistral model this could look like this:
|
| 153 |
+
|
| 154 |
+
```python
|
| 155 |
+
new_tokens = ['<think>', '</think>']
|
| 156 |
+
tokenizer.add_tokens(new_tokens)
|
| 157 |
+
base_model.resize_token_embeddings(len(tokenizer))
|
| 158 |
+
|
| 159 |
+
lora_config = LoraConfig(
|
| 160 |
+
...,
|
| 161 |
+
trainable_token_indices={'embed_tokens': tokenizer.convert_tokens_to_ids(new_tokens)},
|
| 162 |
+
)
|
| 163 |
+
```
|
| 164 |
+
|
| 165 |
+
If your model uses tied weights (such as the `lm_head`), trainable tokens will try to resolve those and keep them updated as well, so in that case there should be no need for adding `modules_to_save=["lm_head"]`. This only works if the model uses the Transformers convention for tying weights.
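
If you are unsure whether your model ties its embedding weights the Transformers way, a quick check is sketched below, assuming `base_model` is a loaded Transformers causal LM such as the Mistral model from the example above:

```python
# most Transformers models expose the tying decision on their config
print(base_model.config.tie_word_embeddings)

# when tying is active, input and output embeddings share the same weight tensor
tied = base_model.get_input_embeddings().weight is base_model.get_output_embeddings().weight
print(tied)
```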
|
| 166 |
+
|
| 167 |
+
Saving the model with `model.save_pretrained` may save the full embedding matrix instead of
|
| 168 |
+
only the difference as a precaution because the embedding matrix was resized. To save space, you can disable this behavior by setting `save_embedding_layers=False` when calling `save_pretrained`. This is safe to do as long as you don't modify the embedding matrix through other means, as such changes will not be tracked by trainable tokens.
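
For illustration, assuming the adapter is saved to a local directory called `my_adapter`:

```python
# store only the trainable token deltas instead of the full, resized embedding matrix
model.save_pretrained("my_adapter", save_embedding_layers=False)
```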
|
| 169 |
+
|
| 170 |
+
#### Using an adapter, e.g. LoRA
|
| 171 |
+
|
| 172 |
+
Prepare the embedding layer by adding it to the `target_modules` of your adapter config. For example, the Mistral config could look like this:
|
| 173 |
+
|
| 174 |
+
```python
|
| 175 |
+
config = LoraConfig(..., target_modules=["embed_tokens", "lm_head", "q_proj", "v_proj"])
|
| 176 |
+
```
|
| 177 |
+
|
| 178 |
+
Once added to `target_modules`, PEFT automatically stores the embedding layer when saving the adapter if the model has the [`~transformers.PreTrainedModel.get_input_embeddings`] and [`~transformers.PreTrainedModel.get_output_embeddings`] methods. This is generally the case for Transformers models.
|
| 179 |
+
|
| 180 |
+
If the model's embedding layer doesn't follow Transformers' naming scheme but nevertheless implements `get_input_embeddings`, you can still save it by manually passing `save_embedding_layers=True` when saving the adapter:
|
| 181 |
+
|
| 182 |
+
```python
|
| 183 |
+
model = get_peft_model(...)
|
| 184 |
+
# train the model
|
| 185 |
+
model.save_pretrained("my_adapter", save_embedding_layers=True)
|
| 186 |
+
```
|
| 187 |
+
|
| 188 |
+
For inference, load the base model first and resize it the same way you did before you trained the model. After you've resized the base model, you can load the PEFT checkpoint.
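
A sketch of what this could look like, assuming a Mistral base model and that the tokenizer with the added tokens was saved alongside the adapter in `my_adapter`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("my_adapter")  # contains the added tokens
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
base_model.resize_token_embeddings(len(tokenizer))  # same resize as before training
peft_model = PeftModel.from_pretrained(base_model, "my_adapter")
```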
|
| 189 |
+
|
| 190 |
+
For a complete example, please check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb).
|
| 191 |
+
|
| 192 |
+
#### Full fine-tuning
|
| 193 |
+
|
| 194 |
+
Full fine-tuning is more costly in terms of VRAM and storage space, but if all else fails, you can fall back to it and see if it works for you. Achieve it by adding the name of the embedding layer to `modules_to_save`. Note that you need to add tied layers as well, e.g. `lm_head`. Example for a Mistral model with LoRA:
|
| 195 |
+
|
| 196 |
+
```python
|
| 197 |
+
config = LoraConfig(..., modules_to_save=["embed_tokens", "lm_head"], target_modules=["q_proj", "v_proj"])
|
| 198 |
+
```
|
| 199 |
+
|
| 200 |
+
### Getting a warning about "weights not being initialized from the model checkpoint"
|
| 201 |
+
|
| 202 |
+
When you load a PEFT model that has been trained on a task (for example, classification), you may get a warning like:
|
| 203 |
+
|
| 204 |
+
> Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-3.2-1B and are newly initialized: ['score.weight']. You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
|
| 205 |
+
|
| 206 |
+
Although this looks scary, it is most likely nothing to worry about. This warning comes from Transformers, and it isn't a PEFT-specific warning. It lets you know that a randomly initialized classification head (`score`) is attached to the base model, and the head must be trained to produce sensible predictions.
|
| 207 |
+
|
| 208 |
+
When you get this warning _before_ training the model, PEFT automatically takes care of making the classification head trainable if you correctly passed the `task_type` argument to the PEFT config.
|
| 209 |
+
|
| 210 |
+
```python
|
| 211 |
+
from peft import LoraConfig, TaskType
|
| 212 |
+
|
| 213 |
+
lora_config = LoraConfig(..., task_type=TaskType.SEQ_CLS)
|
| 214 |
+
```
|
| 215 |
+
|
| 216 |
+
If your classification head does not follow the usual naming conventions from Transformers (which is rare), you have to explicitly tell PEFT the name of the head in `modules_to_save`.
|
| 217 |
+
|
| 218 |
+
```python
|
| 219 |
+
lora_config = LoraConfig(..., modules_to_save=["name-of-classification-head"])
|
| 220 |
+
```
|
| 221 |
+
|
| 222 |
+
To check the name of the classification head, print the model; the head is typically the last module.
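
If printing the whole model is too verbose, listing the top-level submodules is usually enough to spot the head; this is just a convenience sketch assuming `base_model` is the loaded Transformers model:

```python
# for e.g. LlamaForSequenceClassification this prints "model" followed by "score";
# the classification head is the last entry
for name, _ in base_model.named_children():
    print(name)
```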
|
| 223 |
+
|
| 224 |
+
If you get this warning from your inference code, i.e. _after_ training the model, remember that when you load the PEFT model, you always have to load the Transformers model first. Since Transformers does not know that you will load PEFT weights afterwards, it still emits the warning.
|
| 225 |
+
|
| 226 |
+
As always, it is best practice to ensure the model works correctly for inference by running some validation on it.
|
| 227 |
+
|
| 228 |
+
### Check layer and model status
|
| 229 |
+
|
| 230 |
+
Sometimes a PEFT model can end up in a bad state, especially when handling multiple adapters. There can be some confusion around what adapters exist, which one is active, which one is merged, etc. To help investigate this issue, call the [`~peft.PeftModel.get_layer_status`] and the [`~peft.PeftModel.get_model_status`] methods.
|
| 231 |
+
|
| 232 |
+
The [`~peft.PeftModel.get_layer_status`] method gives you a detailed overview of each targeted layer's active, merged, and available adapters.
|
| 233 |
+
|
| 234 |
+
```python
|
| 235 |
+
>>> from transformers import AutoModel
|
| 236 |
+
>>> from peft import get_peft_model, LoraConfig
|
| 237 |
+
|
| 238 |
+
>>> model_id = "google/flan-t5-small"
|
| 239 |
+
>>> model = AutoModel.from_pretrained(model_id)
|
| 240 |
+
>>> model = get_peft_model(model, LoraConfig())
|
| 241 |
+
|
| 242 |
+
>>> model.get_layer_status()
|
| 243 |
+
[TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.q',
|
| 244 |
+
module_type='lora.Linear',
|
| 245 |
+
enabled=True,
|
| 246 |
+
active_adapters=['default'],
|
| 247 |
+
merged_adapters=[],
|
| 248 |
+
requires_grad={'default': True},
|
| 249 |
+
available_adapters=['default']),
|
| 250 |
+
TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.v',
|
| 251 |
+
module_type='lora.Linear',
|
| 252 |
+
enabled=True,
|
| 253 |
+
active_adapters=['default'],
|
| 254 |
+
merged_adapters=[],
|
| 255 |
+
requires_grad={'default': True},
|
| 256 |
+
available_adapters=['default']),
|
| 257 |
+
...]
|
| 258 |
+
|
| 259 |
+
>>> model.get_model_status()
|
| 260 |
+
TunerModelStatus(
|
| 261 |
+
base_model_type='T5Model',
|
| 262 |
+
adapter_model_type='LoraModel',
|
| 263 |
+
peft_types={'default': 'LORA'},
|
| 264 |
+
trainable_params=344064,
|
| 265 |
+
total_params=60855680,
|
| 266 |
+
num_adapter_layers=48,
|
| 267 |
+
enabled=True,
|
| 268 |
+
active_adapters=['default'],
|
| 269 |
+
merged_adapters=[],
|
| 270 |
+
requires_grad={'default': True},
|
| 271 |
+
available_adapters=['default'],
|
| 272 |
+
)
|
| 273 |
+
```
|
| 274 |
+
|
| 275 |
+
In the model state output, you should look out for entries that say `"irregular"`. This means PEFT detected an inconsistent state in the model. For instance, if `merged_adapters="irregular"`, it means that for at least one adapter, it was merged on some target modules but not on others. The inference results will most likely be incorrect as a result.
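
As a sketch, you could guard your inference code with a check along these lines:

```python
from peft import get_model_status

status = get_model_status(model)
if status.enabled == "irregular" or status.merged_adapters == "irregular":
    raise RuntimeError("PEFT model is in an inconsistent state, reload the model and adapter(s)")
```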
|
| 276 |
+
|
| 277 |
+
The best way to resolve this issue is to reload the whole model and adapter checkpoint(s). Ensure that you don't perform any incorrect operations on the model, e.g. manually merging adapters on some modules but not others.
|
| 278 |
+
|
| 279 |
+
Convert the layer status into a pandas `DataFrame` for an easier visual inspection.
|
| 280 |
+
|
| 281 |
+
```python
|
| 282 |
+
from dataclasses import asdict
|
| 283 |
+
import pandas as pd
|
| 284 |
+
|
| 285 |
+
df = pd.DataFrame(asdict(layer) for layer in model.get_layer_status())
|
| 286 |
+
```
|
| 287 |
+
|
| 288 |
+
It is possible to get this information for non-PEFT models if they are using PEFT layers under the hood, but some information like the `base_model_type` or the `peft_types` cannot be determined in that case. As an example, you can call this on a [diffusers](https://huggingface.co/docs/diffusers/index) model like so:
|
| 289 |
+
|
| 290 |
+
```python
|
| 291 |
+
>>> import torch
|
| 292 |
+
>>> from diffusers import StableDiffusionPipeline
|
| 293 |
+
>>> from peft import get_model_status, get_layer_status
|
| 294 |
+
|
| 295 |
+
>>> path = "runwayml/stable-diffusion-v1-5"
|
| 296 |
+
>>> lora_id = "takuma104/lora-test-text-encoder-lora-target"
|
| 297 |
+
>>> pipe = StableDiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16)
|
| 298 |
+
>>> pipe.load_lora_weights(lora_id, adapter_name="adapter-1")
|
| 299 |
+
>>> pipe.load_lora_weights(lora_id, adapter_name="adapter-2")
|
| 300 |
+
>>> pipe.set_lora_device(["adapter-2"], "cuda")
|
| 301 |
+
>>> get_layer_status(pipe.text_encoder)
|
| 302 |
+
[TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.k_proj',
|
| 303 |
+
module_type='lora.Linear',
|
| 304 |
+
enabled=True,
|
| 305 |
+
active_adapters=['adapter-2'],
|
| 306 |
+
merged_adapters=[],
|
| 307 |
+
requires_grad={'adapter-1': False, 'adapter-2': True},
|
| 308 |
+
available_adapters=['adapter-1', 'adapter-2'],
|
| 309 |
+
devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
|
| 310 |
+
TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.v_proj',
|
| 311 |
+
module_type='lora.Linear',
|
| 312 |
+
enabled=True,
|
| 313 |
+
active_adapters=['adapter-2'],
|
| 314 |
+
merged_adapters=[],
|
| 315 |
+
requires_grad={'adapter-1': False, 'adapter-2': True},
|
| 316 |
+
devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
|
| 317 |
+
...]
|
| 318 |
+
|
| 319 |
+
>>> get_model_status(pipe.unet)
|
| 320 |
+
TunerModelStatus(
|
| 321 |
+
base_model_type='other',
|
| 322 |
+
adapter_model_type='None',
|
| 323 |
+
peft_types={},
|
| 324 |
+
trainable_params=797184,
|
| 325 |
+
total_params=861115332,
|
| 326 |
+
num_adapter_layers=128,
|
| 327 |
+
enabled=True,
|
| 328 |
+
active_adapters=['adapter-2'],
|
| 329 |
+
merged_adapters=[],
|
| 330 |
+
requires_grad={'adapter-1': False, 'adapter-2': True},
|
| 331 |
+
available_adapters=['adapter-1', 'adapter-2'],
|
| 332 |
+
devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']},
|
| 333 |
+
)
|
| 334 |
+
```
|
| 335 |
+
|
| 336 |
+
## Speed
|
| 337 |
+
|
| 338 |
+
### Loading adapter weights is slow
|
| 339 |
+
|
| 340 |
+
Loading adapters like LoRA weights should generally be fast compared to loading the base model. However, there can be use cases where the adapter weights are quite large or where users need to load a large number of adapters -- the loading time can add up in this case. The reason for this is that the adapter weights are first initialized and then overridden by the loaded weights, which is wasteful. To speed up the loading time, you can pass the `low_cpu_mem_usage=True` argument to [`~PeftModel.from_pretrained`] and [`~PeftModel.load_adapter`].
|
| 341 |
+
|
| 342 |
+
> [!TIP]
|
| 343 |
+
> If this option works well across different use cases, it may become the default for adapter loading in the future.
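
For example, both loading entry points accept the flag; `base_model` and the adapter paths below are placeholders:

```python
from peft import PeftModel

model = PeftModel.from_pretrained(base_model, "path/to/adapter_0", low_cpu_mem_usage=True)
model.load_adapter("path/to/adapter_1", adapter_name="other", low_cpu_mem_usage=True)
```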
|
| 344 |
+
|
| 345 |
+
|
| 346 |
+
## Reproducibility
|
| 347 |
+
|
| 348 |
+
### Models using batch norm
|
| 349 |
+
|
| 350 |
+
When loading a trained PEFT model where the base model uses batch norm (e.g. `torch.nn.BatchNorm1d` or `torch.nn.BatchNorm2d`), you may find that you cannot reproduce the exact same outputs. This is because the batch norm layers keep track of running stats during training, but these stats are not part of the PEFT checkpoint. Therefore, when you load the PEFT model, the running stats of the base model will be used (i.e. from before training with PEFT).
|
| 351 |
+
|
| 352 |
+
Depending on your use case, this may not be a big deal. If, however, you need your outputs to be 100% reproducible, you can achieve this by adding the batch norm layers to `modules_to_save`. Below is an example of this using ResNet and LoRA. Notice that we set `modules_to_save=["classifier", "normalization"]`. We need the `"classifier"` argument because our task is image classification, and we add the `"normalization"` argument to ensure that the batch norm layers are saved in the PEFT checkpoint.
|
| 353 |
+
|
| 354 |
+
```python
|
| 355 |
+
from transformers import AutoModelForImageClassification
|
| 356 |
+
from peft import LoraConfig, get_peft_model
|
| 357 |
+
|
| 358 |
+
model_id = "microsoft/resnet-18"
|
| 359 |
+
base_model = AutoModelForImageClassification.from_pretrained(model_id)
|
| 360 |
+
config = LoraConfig(
|
| 361 |
+
target_modules=["convolution"],
|
| 362 |
+
modules_to_save=["classifier", "normalization"],
|
| 363 |
+
)
model = get_peft_model(base_model, config)
|
| 364 |
+
```
|
| 365 |
+
|
| 366 |
+
Depending on the type of model you use, the batch norm layers could have different names than `"normalization"`, so please ensure that the name matches your model architecture.
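
One way to find the right names, shown here as a small sketch, is to list the attribute names under which the base model registers its batch norm layers:

```python
import torch

bn_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)
names = {name.split(".")[-1] for name, module in base_model.named_modules() if isinstance(module, bn_types)}
print(names)  # for microsoft/resnet-18 this includes "normalization"
```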
|
| 367 |
+
|
| 368 |
+
## Version mismatch
|
| 369 |
+
|
| 370 |
+
### Error while loading the config because of an unexpected keyword argument
|
| 371 |
+
|
| 372 |
+
When you encounter an error like the one shown below, it means the adapter you're trying to load was trained with a more recent version of PEFT than the version you have installed on your system.
|
| 373 |
+
|
| 374 |
+
```
|
| 375 |
+
TypeError: LoraConfig.__init__() got an unexpected keyword argument <argument-name>
|
| 376 |
+
```
|
| 377 |
+
|
| 378 |
+
The best way to resolve this issue is to install the latest PEFT version:
|
| 379 |
+
|
| 380 |
+
```sh
|
| 381 |
+
python -m pip install -U peft
|
| 382 |
+
```
|
| 383 |
+
|
| 384 |
+
If the adapter was trained from a source install of PEFT (an unreleased version of PEFT), then you also need to install PEFT from source.
|
| 385 |
+
|
| 386 |
+
```sh
|
| 387 |
+
python -m pip install -U git+https://github.com/huggingface/peft.git
|
| 388 |
+
```
|
| 389 |
+
|
| 390 |
+
If it is not possible for you to upgrade PEFT, there is a workaround you can try.
|
| 391 |
+
|
| 392 |
+
Assume the error message says that the unknown keyword argument is named `foobar`. Search inside the `adapter_config.json` of this PEFT adapter for the `foobar` entry and delete it from the file. Then save the file and try loading the model again.
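
If you prefer to script the workaround, something along these lines would do it; the path and the `foobar` key are placeholders:

```python
import json

path = "path/to/adapter/adapter_config.json"
with open(path) as f:
    config = json.load(f)

config.pop("foobar", None)  # drop the entry your PEFT version doesn't know about

with open(path, "w") as f:
    json.dump(config, f, indent=2)
```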
|
| 393 |
+
|
| 394 |
+
This solution works most of the time. As long as `foobar` was left at its default value when the adapter was trained, it can safely be ignored. However, if it was set to some other value, you will get incorrect results. Upgrading PEFT is the recommended solution.
|
| 395 |
+
|
| 396 |
+
## Adapter handling
|
| 397 |
+
|
| 398 |
+
### Using multiple adapters at the same time
|
| 399 |
+
|
| 400 |
+
PEFT allows you to create more than one adapter on the same model. This can be useful in many situations. For example, for inference, you may want to serve two fine-tuned models from the same base model instead of loading the base model once for each fine-tuned model, which would cost more memory. Moreover, multiple adapters can be activated at the same time, so the model can leverage what all of those adapters have learned. As an example, if you have a diffusion model, you may want to use one LoRA adapter to change the style and a different one to change the subject.
|
| 401 |
+
|
| 402 |
+
Activating multiple adapters at the same time is generally possible on all PEFT methods (LoRA, LoHa, IA³, etc.) except for prompt learning methods (p-tuning, prefix tuning, etc.). The following example illustrates how to achieve this:
|
| 403 |
+
|
| 404 |
+
```python
|
| 405 |
+
from transformers import AutoModelForCausalLM
|
| 406 |
+
from peft import PeftModel
|
| 407 |
+
|
| 408 |
+
model_id = ...
|
| 409 |
+
base_model = AutoModelForCausalLM.from_pretrained(model_id)
|
| 410 |
+
model = PeftModel.from_pretrained(base_model, lora_path_0) # default adapter_name is 'default'
|
| 411 |
+
model.load_adapter(lora_path_1, adapter_name="other")
|
| 412 |
+
# the 'other' adapter was loaded but it's not active yet, so to activate both adapters:
|
| 413 |
+
model.base_model.set_adapter(["default", "other"])
|
| 414 |
+
```
|
| 415 |
+
|
| 416 |
+
> [!TIP]
|
| 417 |
+
> In the example above, you can see that we need to call `model.base_model.set_adapter(["default", "other"])`. Why can we not call `model.set_adapter(["default", "other"])`? This is unfortunately not possible because, as explained earlier, some PEFT methods don't support activating more than one adapter at a time.
|
| 418 |
+
|
| 419 |
+
It is also possible to train two adapters at the same time, but you should be careful to ensure that the weights of both adapters are known to the optimizer. Otherwise, only one adapter will receive updates.
|
| 420 |
+
|
| 421 |
+
```python
|
| 422 |
+
from transformers import AutoModelForCausalLM
|
| 423 |
+
from peft import LoraConfig, get_peft_model
|
| 424 |
+
|
| 425 |
+
model_id = ...
|
| 426 |
+
base_model = AutoModelForCausalLM.from_pretrained(model_id)
|
| 427 |
+
lora_config_0 = LoraConfig(...)
|
| 428 |
+
lora_config_1 = LoraConfig(...)
|
| 429 |
+
model = get_peft_model(base_model, lora_config_0)
|
| 430 |
+
model.add_adapter(adapter_name="other", peft_config=lora_config_1)
|
| 431 |
+
```
|
| 432 |
+
|
| 433 |
+
If we now called:
|
| 434 |
+
|
| 435 |
+
```python
|
| 436 |
+
from transformers import Trainer
|
| 437 |
+
|
| 438 |
+
trainer = Trainer(model=model, ...)
|
| 439 |
+
trainer.train()
|
| 440 |
+
```
|
| 441 |
+
|
| 442 |
+
or
|
| 443 |
+
|
| 444 |
+
```python
|
| 445 |
+
optimizer = torch.optim.AdamW([param for param in model.parameters() if param.requires_grad], ...)
|
| 446 |
+
```
|
| 447 |
+
|
| 448 |
+
then the second LoRA adapter (`"other"`) would not be trained. This is because it is inactive at this moment, which means the `requires_grad` attribute on its parameters is set to `False` and the optimizer will ignore it. Therefore, make sure to activate all adapters that should be trained _before_ initializing the optimizer:
|
| 449 |
+
|
| 450 |
+
```python
|
| 451 |
+
# activate all adapters
|
| 452 |
+
model.base_model.set_adapter(["default", "other"])
|
| 453 |
+
trainer = Trainer(model=model, ...)
|
| 454 |
+
trainer.train()
|
| 455 |
+
```
|
| 456 |
+
|
| 457 |
+
> [!TIP]
|
| 458 |
+
> This section deals with using multiple adapters _of the same type_ on the same model, for example, using multiple LoRA adapters at the same time. It does not apply to using _different types_ of adapters on the same model, for example one LoRA adapter and one LoHa adapter. For this, please check [`PeftMixedModel`](https://huggingface.co/docs/peft/developer_guides/mixed_models).
|
peft/docs/source/index.md
ADDED
|
@@ -0,0 +1,49 @@
|
| 1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
| 2 |
+
|
| 3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
| 4 |
+
the License. You may obtain a copy of the License at
|
| 5 |
+
|
| 6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
| 7 |
+
|
| 8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
| 9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
| 10 |
+
specific language governing permissions and limitations under the License.
|
| 11 |
+
|
| 12 |
+
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
| 13 |
+
rendered properly in your Markdown viewer.
|
| 14 |
+
|
| 15 |
+
-->
|
| 16 |
+
|
| 17 |
+
# PEFT
|
| 18 |
+
|
| 19 |
+
🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting large pretrained models to various downstream applications without fine-tuning all of a model's parameters, since doing so is prohibitively costly. PEFT methods only fine-tune a small number of (extra) model parameters - significantly decreasing computational and storage costs - while yielding performance comparable to a fully fine-tuned model. This makes it more accessible to train and store large language models (LLMs) on consumer hardware.
|
| 20 |
+
|
| 21 |
+
PEFT is integrated with the Transformers, Diffusers, and Accelerate libraries to provide a faster and easier way to load, train, and use large models for inference.
|
| 22 |
+
|
| 23 |
+
<div class="mt-10">
|
| 24 |
+
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
|
| 25 |
+
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="quicktour"
|
| 26 |
+
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Quicktour</div>
|
| 27 |
+
<p class="text-gray-700">Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.</p>
|
| 28 |
+
</a>
|
| 29 |
+
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./task_guides/prompt_based_methods"
|
| 30 |
+
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
|
| 31 |
+
<p class="text-gray-700">Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.</p>
|
| 32 |
+
</a>
|
| 33 |
+
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/adapter"
|
| 34 |
+
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
|
| 35 |
+
<p class="text-gray-700">Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.</p>
|
| 36 |
+
</a>
|
| 37 |
+
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/config"
|
| 38 |
+
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
|
| 39 |
+
<p class="text-gray-700">Technical descriptions of how 🤗 PEFT classes and methods work.</p>
|
| 40 |
+
</a>
|
| 41 |
+
</div>
|
| 42 |
+
</div>
|
| 43 |
+
|
| 44 |
+
<iframe
|
| 45 |
+
src="https://stevhliu-peft-methods.hf.space"
|
| 46 |
+
frameborder="0"
|
| 47 |
+
width="850"
|
| 48 |
+
height="620"
|
| 49 |
+
></iframe>
|