# YOLOv8 - TFLite Runtime

This example shows how to run inference with a YOLOv8 TFLite model. It supports FP32, FP16, and INT8 models.
## Installation

### Installing `tflite-runtime`

To load TFLite models, install the `tflite-runtime` package using:

```bash
pip install tflite-runtime
```
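To verify the installation, you can try constructing an interpreter from a `.tflite` file. Below is a minimal sketch, assuming you already have a model such as the `yolov8n_full_integer_quant.tflite` produced in the export step later in this guide:

```python
# Minimal sketch: check that tflite-runtime can load a TFLite model.
# The model path is an example; substitute any .tflite file you have.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="yolov8n_saved_model/yolov8n_full_integer_quant.tflite")
interpreter.allocate_tensors()

# Print the input tensor shape to confirm the model loaded correctly.
print(interpreter.get_input_details()[0]["shape"])
```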
### Installing `tensorflow-gpu` (For NVIDIA GPU Users)

Leverage GPU acceleration with NVIDIA GPUs by installing `tensorflow-gpu`:

```bash
pip install tensorflow-gpu
```

**Note:** Ensure you have compatible GPU drivers installed on your system.
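To confirm that TensorFlow actually sees your GPU after installation, a quick check like the following sketch can help:

```python
# Quick sanity check that TensorFlow detects the NVIDIA GPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {gpus}")
```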
### Installing `tensorflow` (CPU Version)

For CPU usage or non-NVIDIA GPUs, install TensorFlow with:

```bash
pip install tensorflow
```
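Since the TFLite interpreter can come from either `tflite-runtime` or full TensorFlow, scripts often use an import fallback. The sketch below shows this common pattern (how `main.py` handles it may differ):

```python
# Prefer the lightweight tflite-runtime interpreter; fall back to full TensorFlow.
try:
    from tflite_runtime.interpreter import Interpreter
except ImportError:
    import tensorflow as tf

    Interpreter = tf.lite.Interpreter
```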
## Usage

Follow these instructions to run YOLOv8 after successful installation.

Convert the YOLOv8 model to TFLite format:

```bash
yolo export model=yolov8n.pt imgsz=640 format=tflite int8
```
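The same export can also be done from Python with the `ultralytics` package; the sketch below mirrors the CLI arguments above:

```python
# Export YOLOv8n to TFLite with INT8 quantization, mirroring the CLI command.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="tflite", imgsz=640, int8=True)
```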
Locate the TFLite model in `yolov8n_saved_model`. Then, run inference from your terminal:

```bash
python main.py --model yolov8n_full_integer_quant.tflite --img image.jpg --conf 0.25 --iou 0.45 --metadata "metadata.yaml"
```

Replace `yolov8n_full_integer_quant.tflite` with your TFLite model path, `image.jpg` with the input image path, and `metadata.yaml` with the metadata file generated by `ultralytics` during export; adjust the confidence (`conf`) and IoU (`iou`) thresholds as needed.
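For reference, the sketch below outlines the kind of pipeline such a script implements: load the interpreter, resize the image to the model's input size, run inference, and read back the raw output tensor. It is an illustrative simplification, not the actual `main.py`; the paths are the same placeholders used above, and post-processing (box decoding, NMS) is omitted.

```python
# Illustrative sketch of TFLite inference; not the actual main.py.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

interpreter = Interpreter(model_path="yolov8n_saved_model/yolov8n_full_integer_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Read the image and resize it to the model's expected input size (e.g. 640x640).
h, w = input_details["shape"][1], input_details["shape"][2]
img = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (w, h)).astype(np.float32) / 255.0

# For INT8 models, quantize the input using the tensor's scale and zero point.
if input_details["dtype"] == np.int8:
    scale, zero_point = input_details["quantization"]
    img = (img / scale + zero_point).astype(np.int8)

interpreter.set_tensor(input_details["index"], img[None])  # add batch dimension
interpreter.invoke()

# Raw predictions; box decoding, confidence filtering, and NMS are omitted here.
output = interpreter.get_tensor(output_details["index"])
print(output.shape)
```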
### Output

The output displays the detections along with the class labels and confidence scores for each detected object.
|  | |