Upload folder using huggingface_hub
- LICENSE.md +55 -0
- README.md +270 -0
    	
LICENSE.md ADDED

# LICENSE

## 1. Model & License Summary

This repository contains **JAM** (the "Model"), the first open-sourced model released under **Project Jamify**, developed to facilitate academic research and creative exploration in AI-generated songs from lyrics. The Model is subject to:

1. The **Project Jamify License**: intended **solely for non-commercial, academic, and entertainment purposes**.
2. The **Stability AI Community License Agreement**, provided in the file `STABILITY_AI_COMMUNITY_LICENSE.md`.

By using or distributing this Model, you **agree** to adhere to all applicable licenses and restrictions, as summarized below.

---

## 2. Project Jamify License Terms

**JAM** is developed with the primary objective of facilitating academic research and creative exploration in AI-generated songs from lyrics. We emphasize the following:

- **No copyrighted material** was used in a way that would intentionally infringe on intellectual property rights. JAM is not designed to reproduce or imitate any specific artist, label, or protected work.
- Outputs generated by JAM must **not be used to create or disseminate content that violates copyright laws**.
- The **commercial use of JAM or its outputs is strictly prohibited**.
- Responsibility for the use of the Model and its outputs lies entirely with the end user, who must ensure all uses comply with applicable legal and ethical standards.

---

## 3. Stability AI Community License Requirements

- You must comply with the **Stability AI Community License Agreement** (the "Agreement") for any usage, distribution, or modification of this Model.
- **Non-Commercial Use**: This Model is for research and academic purposes only. Any commercial usage requires registering with Stability AI or obtaining a separate commercial license.
- **Attribution & Notice**:
  - Retain the notice:
    ```
    This Stability AI Model is licensed under the Stability AI Community License, Copyright © Stability AI Ltd. All Rights Reserved.
    ```
  - Clearly display "Powered by Stability AI" if you build upon or showcase this Model.
- **Disclaimer & Liability**: This Model is provided **"AS IS"** with **no warranties**. Neither we nor Stability AI will be liable for any claim or damages related to Model use.

See `STABILITY_AI_COMMUNITY_LICENSE.md` for the full text.

---

## 4. UK Data Copyright Exemption

This Model was developed under the **UK data copyright exemption for non-commercial research**. Distribution or use outside these bounds must **not** violate that exemption or infringe on any underlying dataset's license.

---

## 5. Further Information

- **Stability AI License Terms**: <https://stability.ai/community-license>

For questions, concerns, or collaboration inquiries, please contact the Project Jamify team via the official repository or project website.

---

**End of License.**
README.md ADDED

# JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment

JAM is a rectified flow-based model for lyrics-to-song generation that addresses the lack of fine-grained word-level controllability in existing lyrics-to-song models. Built on a compact 530M-parameter architecture with 16 LLaMA-style Transformer layers as the Diffusion Transformer (DiT) backbone, JAM enables the precise vocal control that musicians want in their workflows. Unlike previous models, JAM provides word- and phoneme-level timing control, allowing musicians to specify the exact placement of each vocal sound for improved rhythmic flexibility and expressive timing.

## Features

- **Fine-grained Word- and Phoneme-level Timing Control**: The first model to provide word-level timing and duration control in song generation, enabling precise prosody control for musicians
- **Compact 530M-Parameter Architecture**: Less than half the size of existing models, enabling faster inference with reduced resource requirements
- **Enhanced Lyric Fidelity**: Achieves over a 3× reduction in Word Error Rate (WER) and Phoneme Error Rate (PER) compared to prior work through precise phoneme boundary attention
- **Global Duration Control**: Controllable song duration of up to 3 minutes and 50 seconds
- **Aesthetic Alignment through Direct Preference Optimization**: Iterative refinement using synthetic preference datasets to better align with human aesthetic preferences, eliminating manual annotation requirements

## JAM Samples

Check out the example generated music in the `generated_examples/` folder to hear what JAM can produce:

- **`Hybrid Minds, Brodie - Heroin.mp3`** - Electronic music with synthesized beats and electronic elements
- **`Jade Bird - Avalanche.mp3`** - Country music with acoustic guitar and folk influences
- **`Rizzle Kicks, Rachel Chinouriri - Follow Excitement!.mp3`** - Rap music with rhythmic beats and hip-hop style

These samples demonstrate JAM's ability to generate high-quality music across different genres while maintaining vocal intelligibility, style consistency, and musical coherence.

## Requirements

- Python 3.10 or higher
- CUDA-compatible GPU with sufficient VRAM (8GB+ recommended)
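
Before installing, you can confirm the GPU requirement is met with a quick check. This is a minimal sketch assuming PyTorch is available in your environment (it is expected to come in via `requirements.txt`, but verify there first):

```python
import torch

# Pre-flight check: verify a CUDA GPU is visible and report its VRAM.
assert torch.cuda.is_available(), "No CUDA-compatible GPU detected"
props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / 1024**3
print(f"{props.name}: {vram_gb:.1f} GB VRAM")
if vram_gb < 8:
    print("Warning: less than the recommended 8 GB of VRAM")
```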

## Installation

### 1. Clone the Repository

```bash
git clone <repository-url>
cd jam
```

### 2. Run Installation Script

The project includes an automated installation script; run it inside your own virtual environment:

```bash
bash install.sh
```

This script will:
- Initialize and update git submodules (DeepPhonemizer)
- Install Python dependencies from `requirements.txt`
- Install the JAM package in editable mode
- Install the DeepPhonemizer external dependency

### 3. Manual Installation (Alternative)

If you prefer manual installation:

```bash
# Initialize submodules
git submodule update --init --recursive

# Install dependencies
pip install -r requirements.txt

# Install JAM package
pip install -e .

# Install DeepPhonemizer
pip install -e externals/DeepPhonemizer
```

## Quick Start

### Simple Inference

The easiest way to run inference is with the provided `inference.py` script:

```bash
python inference.py
```

This script will:
1. Download the pre-trained JAM-0.5 model from Hugging Face
2. Run inference with default settings
3. Save generated audio to the `outputs` directory

### Input Format

Create an input file at `inputs/input.json` with your songs:

```json
[
  {
    "id": "my_song",
    "audio_path": "inputs/reference_audio.mp3",
    "lrc_path": "inputs/lyrics.json",
    "duration": 180.0,
    "prompt_path": "inputs/style_prompt.txt"
  }
]
```

Required files:
- **Audio file**: Reference audio for style extraction
- **Lyrics file**: JSON with timestamped lyrics
- **Prompt file**: Text description of the desired style/genre. The text prompt is not used in the default setting, where the audio reference is used instead.
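
If you are preparing many songs, the manifest above can also be written programmatically. Here is a minimal sketch using only the standard library (the file paths are illustrative, not files shipped with the repository):

```python
import json
from pathlib import Path

# Illustrative entries; point these at your own reference audio, lyrics, and prompt files.
songs = [
    {
        "id": "my_song",
        "audio_path": "inputs/reference_audio.mp3",
        "lrc_path": "inputs/lyrics.json",
        "duration": 180.0,  # target length in seconds (JAM supports up to ~230 s)
        "prompt_path": "inputs/style_prompt.txt",
    }
]

Path("inputs").mkdir(exist_ok=True)
with open("inputs/input.json", "w") as f:
    json.dump(songs, f, indent=2)
```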

## Advanced Usage

### Using `python -m jam.infer`

For more control over the generation process:

```bash
# Basic usage with custom checkpoint
python -m jam.infer evaluation.checkpoint_path=path/to/model.safetensors

# With custom output directory
python -m jam.infer evaluation.checkpoint_path=path/to/model.safetensors evaluation.output_dir=my_outputs

# With custom configuration file
python -m jam.infer config=configs/my_config.yaml evaluation.checkpoint_path=path/to/model.safetensors
```

### Multi-GPU Inference

Use Accelerate for distributed inference:

```bash
# Basic usage with a custom Accelerate config
accelerate launch --config_file path/to/accelerate/config.yaml -m jam.infer

# With a custom inference configuration file
accelerate launch --config_file path/to/accelerate/config.yaml -m jam.infer config=path/to/inference/config.yaml
```

## Configuration Options

### Key Parameters

#### Evaluation Settings
- `evaluation.checkpoint_path`: Path to model checkpoint (required)
- `evaluation.output_dir`: Output directory (default: "outputs")
- `evaluation.test_set_path`: Input JSON file (default: "inputs/input.json")
- `evaluation.batch_size`: Batch size for inference (default: 1)
- `evaluation.num_samples`: Generate only the first n samples in `test_set_path` (null = all)
- `evaluation.vae_type`: VAE model type ("diffrhythm" or "stable_audio")

#### Style Control
- `evaluation.ignore_style`: Ignore style prompts (default: false)
- `evaluation.use_prompt_style`: Use text prompts for style (default: false)
- `evaluation.num_style_secs`: Style audio duration in seconds (default: 30)
- `evaluation.random_crop_style`: Randomly crop style audio (default: false)

## Input File Formats

### Lyrics File (`*.json`)
```json
[
    {"start": 2.2, "end": 2.5, "word": "First word of lyrics"},
    {"start": 2.5, "end": 3.7, "word": "Second word of lyrics"},
    {"more lines ...."}
]
```
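
Since timing errors in the lyrics file surface directly in the generated vocals, it can help to sanity-check the timestamps before running inference. This is a small helper sketch of my own, not part of the JAM package; it assumes words are listed in order and do not overlap, as in the example above:

```python
import json

def check_lyrics(path: str) -> None:
    """Check that word timings are well-formed: start < end, non-overlapping, in order."""
    with open(path) as f:
        words = json.load(f)
    prev_end = 0.0
    for i, w in enumerate(words):
        assert w["start"] < w["end"], f"entry {i}: start must precede end"
        assert w["start"] >= prev_end, f"entry {i}: overlaps previous word"
        prev_end = w["end"]
    print(f"{len(words)} words, vocals end at {prev_end:.1f}s")

check_lyrics("inputs/lyrics.json")
```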

### Style Prompt File (`*.txt`)
```
Electronic dance music with heavy bass and synthesizers
```

### Input Manifest (`input.json`)
```json
[
  {
    "id": "unique_song_id",
    "audio_path": "path/to/reference.mp3",
    "lrc_path": "path/to/lyrics.json",
    "duration": 180.0,
    "prompt_path": "path/to/style.txt"
  }
]
```

## Output Structure

Generated files are saved to the output directory:

```
outputs/
├── generated/          # Final trimmed audio files
├── generated_orig/     # Original generated audio
├── cfm_latents/        # Intermediate latent representations
├── local_files/        # Process-specific metadata
└── generation_config.yaml  # Configuration used for generation
```
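
Once a run completes, the final songs are the files under `generated/`. A short sketch for collecting them, assuming the default `outputs` directory shown above:

```python
from pathlib import Path

# Collect the final trimmed songs from the default output directory.
for audio in sorted(Path("outputs/generated").iterdir()):
    print(audio.name)
```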

## Performance Tips

1. **GPU Memory**: Use `evaluation.batch_size=1` when running on limited VRAM
2. **Multi-GPU**: Use `accelerate launch` for faster processing of multiple samples
3. **Mixed Precision**: Add `--mixed_precision=fp16` to reduce memory usage

## Troubleshooting

### Common Issues

#### "Checkpoint path not found"
```bash
# Make sure to specify the checkpoint path
python -m jam.infer evaluation.checkpoint_path=path/to/your/model.safetensors
```

#### "CUDA out of memory"
```bash
# Reduce batch size or use mixed precision
accelerate launch --mixed_precision=fp16 -m jam.infer evaluation.checkpoint_path=model.safetensors
```

#### "Test set not found"
```bash
# Create an input.json file in the inputs/ directory or specify a custom path
python -m jam.infer evaluation.test_set_path=path/to/your/input.json evaluation.checkpoint_path=model.safetensors
```

## Model Downloads

The `inference.py` script automatically downloads the JAM-0.5 model. For manual download:

```python
from huggingface_hub import snapshot_download
model_path = snapshot_download(repo_id="declare-lab/jam-0.5")
```
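
To point `jam.infer` at the downloaded weights, you can search the snapshot for a checkpoint file. The sketch below assumes the snapshot contains `.safetensors` weights somewhere in its tree; check the `declare-lab/jam-0.5` repository for the actual layout:

```python
from pathlib import Path
from huggingface_hub import snapshot_download

model_path = snapshot_download(repo_id="declare-lab/jam-0.5")
# Find checkpoint files in the snapshot; pass one to evaluation.checkpoint_path.
for ckpt in Path(model_path).rglob("*.safetensors"):
    print(ckpt)
```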

## Citation

If you use JAM in your research, please cite:

```bibtex
@misc{jam2025,
  title={JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment},
  author={Renhang Liu and Chia-Yu Hung and Navonil Majumder and Taylor Gautreaux and Amir Ali Bagherzadeh and Chuan Li and Dorien Herremans and Soujanya Poria},
  year={2025}
}
```

## License

**JAM** is the first open-sourced model released under **Project Jamify**, developed to facilitate academic research and creative exploration in AI-generated songs from lyrics. The model is subject to:

1. **Project Jamify License**: Intended **solely for non-commercial, academic, and entertainment purposes**
2. **Stability AI Community License Agreement**: Required due to the use of Stability AI model components

### Key Restrictions
- **No copyrighted material** was used in a way that would intentionally infringe on intellectual property rights
- **JAM is not designed** to reproduce or imitate any specific artist, label, or protected work
- Outputs generated by JAM must **not be used to create or disseminate content that violates copyright laws**
- **Commercial use of JAM or its outputs is strictly prohibited**
- **Attribution Required**: Must retain "This Stability AI Model is licensed under the Stability AI Community License, Copyright © Stability AI Ltd. All Rights Reserved."

### Responsibility
Responsibility for the use of the model and its outputs lies entirely with the end user, who must ensure all uses comply with applicable legal and ethical standards.

For complete license terms, see [LICENSE.md](LICENSE.md) and [STABILITY_AI_COMMUNITY_LICENSE.md](STABILITY_AI_COMMUNITY_LICENSE.md).

For questions, concerns, or collaboration inquiries, please contact the Project Jamify team via the official repository.

## Support

For issues and questions:
- Open an issue on GitHub
- Check the troubleshooting section above
- Review the configuration options for parameter tuning