Update README.md
README.md CHANGED

@@ -120,3 +120,44 @@ Contain various examples, including example generation(s) showing 2, 4, and 8 ex
 
 This will give you a better idea of what changes to expect when adjusting the number of experts
 and the effect on generation.
+
+---
+
+<h2>Special Thanks:</h2>
+
+---
+
+Special thanks to all the following, and many more...
+
+All the model makers, fine tuners, mergers, and tweakers:
+- They provide the raw "DNA" for almost all my models.
+- Sources of model(s) can be found on the repo pages, especially the "source" repos with link(s) to the model creator(s).
+
+Huggingface [ https://huggingface.co ] :
+- The place to store, merge, and tune models endlessly.
+- THE reason we have an open source community.
+
+LlamaCPP [ https://github.com/ggml-org/llama.cpp ] :
+- The ability to compress and run models on GPU(s), CPU(s), and almost all devices.
+- Imatrix, quantization, and other tools to tune the quants and the models.
+- Llama-Server: a CLI-based direct interface to run GGUF models.
+- The only tool I use to quant models.
+
+Quant-Masters: Team Mradermacher, Bartowski, and many others:
+- Quant models day and night for us all to use.
+- They are the lifeblood of open source access.
+
+MergeKit [ https://github.com/arcee-ai/mergekit ] :
+- The universal online/offline tool to merge models together and forge something new.
+- Over 20 methods to almost instantly merge models, pull them apart, and put them together again.
+- The tool I have used to create over 1500 models.
+
+LMStudio [ https://lmstudio.ai/ ] :
+- The go-to tool to test and run models in GGUF format.
+- The tool I use to test, refine, and evaluate new models.
+- The LMStudio forum on Discord: endless info and community for open source.
+
+Text Generation WebUI // KoboldCPP // SillyTavern:
+- Excellent tools to run GGUF models with: [ https://github.com/oobabooga/text-generation-webui ] [ https://github.com/LostRuins/koboldcpp ].
+- SillyTavern [ https://github.com/SillyTavern/SillyTavern ] can be used with LMStudio [ https://lmstudio.ai/ ], TextGen [ https://github.com/oobabooga/text-generation-webui ], KoboldCPP [ https://github.com/LostRuins/koboldcpp ], or Llama-Server [part of LlamaCPP] as an off-the-scale front-end control system and interface for working with models.
+