Update README.md
README.md CHANGED
@@ -7,13 +7,13 @@ sdk: static
 pinned: false
 ---
 
-#
+# The Future of AI is Open
 
 Neural Magic helps developers accelerate deep learning performance using automated model compression technologies and inference engines.
 Download our compression-aware inference engines and open source tools for fast model inference.
 * [nm-vllm](https://neuralmagic.com/nm-vllm/): A high-throughput and memory-efficient inference engine for LLMs, our supported enterprise distribution of [vLLM](https://github.com/vllm-project/vllm).
-* [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application.
 * [llm-compressor](https://github.com/vllm-project/llm-compressor/): HF-compatible library for applying various quantization and sparsity algorithms to LLMs for optimized deployment with vLLM.
+* [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application.
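Since nm-vllm is a supported distribution of vLLM, inference follows the standard vLLM Python API. A minimal sketch, assuming nm-vllm exposes the same `vllm` module as upstream; the model name is illustrative:

```python
from vllm import LLM, SamplingParams

# nm-vllm preserves vLLM's Python API, so the usual entry points apply.
# The model name is illustrative; any HF-hosted causal LM works.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
params = SamplingParams(temperature=0.8, max_tokens=128)

outputs = llm.generate(["What does model compression do?"], params)
print(outputs[0].outputs[0].text)
```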
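For llm-compressor, a typical run applies a quantization recipe to a Hugging Face model in one shot (a calibration pass, no training loop) and saves a vLLM-ready checkpoint. A minimal sketch based on the library's documented `oneshot` flow; the model, dataset, and output names are illustrative:

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Quantize all Linear layers to 4-bit weights (W4A16), keeping lm_head dense.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

# One-shot calibration over a small dataset; no fine-tuning required.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative model
    dataset="open_platypus",                     # illustrative calibration set
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The saved directory can then be loaded directly by vLLM (or nm-vllm) for optimized serving.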