Update README.md
pipeline_tag: visual-question-answering
---

# Model Card for InternVL-Chat-V1.1

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/4IG0h_KJ2cvpp9Kdm0Jf7.webp" alt="Image Description" width="300" height="300">
</p>

\[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)\]

We released InternVL-Chat-V1.1, which follows a structure similar to LLaVA: a ViT, an MLP projector, and an LLM. In this version, we explored increasing the input resolution to 448x448, enhancing OCR capabilities, and improving support for Chinese conversations.
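For intuition, here is a minimal PyTorch sketch of how such a LLaVA-style model wires these three pieces together. It is illustrative only: the class name, module layout, and hidden sizes (3200 for an InternViT-6B-like encoder, 5120 for LLaMA2-13B) are assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class LlavaStyleVLM(nn.Module):
    """Illustrative composition: ViT features -> MLP projector -> LLM."""

    def __init__(self, vit: nn.Module, llm: nn.Module,
                 vit_dim: int = 3200, llm_dim: int = 5120):
        super().__init__()
        self.vit = vit                       # vision encoder (e.g., an InternViT-6B-like model)
        self.projector = nn.Sequential(      # MLP projector mapping vision features into LLM space
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.llm = llm                       # language model (e.g., LLaMA2-13B)

    def forward(self, pixel_values: torch.Tensor,
                text_embeds: torch.Tensor) -> torch.Tensor:
        # 448x448 input image -> a sequence of patch tokens from the ViT
        vision_tokens = self.vit(pixel_values)          # (B, N, vit_dim), assumed output shape
        vision_embeds = self.projector(vision_tokens)   # (B, N, llm_dim)
        # visual tokens are prepended to the text embeddings before the LLM
        return self.llm(torch.cat([vision_embeds, text_embeds], dim=1))
```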
## Model Details

- **Model Type:** multimodal large language model (MLLM)

…

- Learnable Component: MLP + LLaMA2-13B (see the sketch after this list)
- Data: A comprehensive collection of open-source datasets, along with their Chinese translations, totaling approximately 6M samples.
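Since the list above names only MLP + LLaMA2-13B as learnable, the ViT is presumably kept frozen in this stage. In terms of the hypothetical `LlavaStyleVLM` sketch from earlier, that split would look like the following; this is illustrative, not the repository's training code.

```python
def set_stage_trainable(model: "LlavaStyleVLM") -> None:
    """Freeze the vision encoder; train the projector and the LLM."""
    for p in model.vit.parameters():
        p.requires_grad = False   # ViT frozen in this stage
    for p in model.projector.parameters():
        p.requires_grad = True    # learnable: MLP projector
    for p in model.llm.parameters():
        p.requires_grad = True    # learnable: LLaMA2-13B
```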
## Released Models

### Vision Foundation Model

| Model                   | Date       | Download                                                                | Note                                                   |
| ----------------------- | ---------- | ----------------------------------------------------------------------- | ------------------------------------------------------ |
| InternViT-6B-448px-V1.5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | supports dynamic resolution; super strong OCR (🔥 new) |
| InternViT-6B-448px-V1.2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution                                         |
| InternViT-6B-448px-V1.0 | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution                                         |
| InternViT-6B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px)      | vision foundation model                                |
| InternVL-14B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px)      | vision-language foundation model                       |
### Multimodal Large Language Model (MLLM)

| Model                   | Date       | Download                                                                | Note                                                                                                                                                          |
| ----------------------- | ---------- | ----------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| InternVL-Chat-V1.5      | 2024.04.18 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)      | supports 4K images; super strong OCR; approaches the performance of GPT-4V and Gemini Pro on benchmarks such as MMMU, DocVQA, ChartQA, and MathVista (🔥 new) |
| InternVL-Chat-V1.2-Plus | 2024.02.21 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) | more SFT data; stronger overall performance                                                                                                                    |
| InternVL-Chat-V1.2      | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2)      | scales the LLM up to 34B                                                                                                                                       |
| InternVL-Chat-V1.1      | 2024.01.24 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)      | supports Chinese; stronger OCR                                                                                                                                 |
## Model Usage
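A minimal usage sketch, assuming this model follows the `transformers` remote-code loading pattern of other InternVL chat releases: the `chat()` helper and its signature, the preprocessing steps, and the example image path are assumptions, so consult the repository code for the authoritative snippet.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

path = "OpenGVLab/InternVL-Chat-V1-1"

# The model ships custom modeling code, hence trust_remote_code=True.
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
image_processor = CLIPImageProcessor.from_pretrained(path)

# Preprocess one image at the model's 448x448 input resolution
# ("./example.jpg" is an illustrative path).
image = Image.open("./example.jpg").convert("RGB").resize((448, 448))
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Single-turn VQA; chat() is the remote-code helper assumed here.
question = "Please describe the image."
generation_config = dict(num_beams=1, max_new_tokens=512, do_sample=False)
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```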