---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
Encoder-only version of the [ANKH base model](https://huggingface.co/ElnaggarLab/ankh-base) ([paper](https://arxiv.org/abs/2301.06568)). Since it omits the T5 decoder, this version is ideal for protein representation tasks.

+
## To download
|
| 9 |
+
```python
|
| 10 |
+
from transformers import T5EncoderModel, AutoTokenizer
|
| 11 |
+
|
| 12 |
+
model_path = 'Synthyra/ANKH_base'
|
| 13 |
+
model = T5EncoderModel.from_pretrained(model_path)
|
| 14 |
+
tokenizer = AutoTokenizer.from_pretrained(model_path)
|
| 15 |
+
```
|
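
Once loaded, the encoder can embed protein sequences directly. The snippet below is a minimal sketch rather than part of the original card: the sequence is a made-up placeholder, and mean pooling is only one common way to collapse per-residue states into a single protein-level vector.

```python
import torch

# Made-up placeholder sequence; substitute a real protein.
sequence = 'MSLKRKNIAL'

# Tokenize and run the encoder without tracking gradients.
inputs = tokenizer(sequence, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# Per-residue embeddings, shape (batch, tokens, hidden_size).
residue_embeddings = outputs.last_hidden_state

# Mean-pool over the token dimension for a single per-protein vector.
protein_embedding = residue_embeddings.mean(dim=1)
```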

We are working on a version of T5-based PLMs built on [Flex attention](https://pytorch.org/blog/flexattention/); it is waiting on FlexAttention support for the learned relative position bias that T5 uses. Stay tuned.
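
For context on what is missing: FlexAttention expresses additive attention biases as a `score_mod` callback, while T5 learns its position bias as a table indexed by (bucketed) relative distance. Below is a rough sketch of what that combination could look like; it is an illustration under assumptions, not our implementation, and the flat, clamped `rel_bias` table simplifies T5's log-bucketed embeddings.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Hypothetical stand-in for T5's learned relative position embeddings:
# one scalar per head per (clamped) relative offset.
num_heads, max_dist = 8, 64
rel_bias = torch.randn(num_heads, 2 * max_dist - 1)

def t5_style_bias(score, b, h, q_idx, kv_idx):
    # Shift the signed relative offset into table range, then add the
    # learned scalar for this head to the raw attention score.
    rel = (q_idx - kv_idx).clamp(-max_dist + 1, max_dist - 1) + max_dist - 1
    return score + rel_bias[h, rel]

# Toy tensors shaped (batch, heads, sequence, head_dim).
q = torch.randn(1, num_heads, 64, 32)
k = torch.randn(1, num_heads, 64, 32)
v = torch.randn(1, num_heads, 64, 32)
out = flex_attention(q, k, v, score_mod=t5_style_bias)
```

Making a captured, learned table like `rel_bias` train efficiently end to end is the support the note above refers to.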