Update README.md

README.md (changed)

@@ -12,12 +12,12 @@ tags:

- art
---

# Model Card for AIdeaLab-VideoJP

AIdeaLab VideoJP is a text-to-video model trained on images under permissive licenses such as CC-BY and CC-0.
AIdeaLab VideoJP is made in Japan.
This model is supported by [GENIAC](https://www.meti.go.jp/english/policy/mono_info_service/geniac/index.html) (NEDO, METI).

## Model Details

@@ -27,9 +27,9 @@

At AIdeaLab, we develop AI technology through active dialogue with creators, aiming for mutual understanding and cooperation.
We strive to solve the challenges creators face and grow together.
One of these challenges is that some creators and fans want to use video generation but cannot, likely because they lack permission to use certain videos for training.
To address this issue, we have developed AIdeaLab VideoJP.

#### Features of AIdeaLab-VideoJP

- Principally uses images with obtained learning permissions
- Understands both Japanese and English text inputs directly

@@ -116,7 +116,7 @@ null_prompt_embeds = null_prompt_embeds.to(dtype=torch_dtype, device=device)

```python
del text_encoder

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "aidealab/AIdeaLab-VideoJP",
    torch_dtype=torch_dtype
)
transformer = transformer.to(device)
```
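The `del text_encoder` step above frees the encoder's weights before the large transformer is loaded, so both never sit in memory at once. A minimal sketch of that encode-then-free pattern, using a dummy `torch.nn.Linear` as a stand-in for the real text encoder (the layer, shapes, and variable names here are illustrative, not part of the model card):

```python
import gc
import torch

torch_dtype = torch.float16
device = "cpu"  # switch to "cuda" when a GPU is available

# Dummy stand-in for the text encoder; illustrates the pattern only.
text_encoder = torch.nn.Linear(8, 16)

# Encode once, then cast the embeddings to the target dtype/device.
with torch.no_grad():
    prompt_embeds = text_encoder(torch.randn(1, 8))
prompt_embeds = prompt_embeds.to(dtype=torch_dtype, device=device)

# Drop the encoder before loading the transformer so its weights
# are released; on GPU, also clear the CUDA allocator cache.
del text_encoder
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()

print(prompt_embeds.dtype)  # torch.float16
```

Only the small embedding tensor survives the cleanup, which is why the real snippet can afford to load the transformer immediately afterwards.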