| Field | Value |
|---|---|
| LLM Name | Mpt 30B Peft Compatible |
| Repository | Open on 🤗 |
| Model Size | 30b |
| Required VRAM | 60.1 GB |
| Updated | 2024-07-27 |
| Maintainer | eluzhnica |
| Model Type | mpt |
| Model Files |  |
| Model Architecture | MPTForCausalLM |
| License | apache-2.0 |
| Model Max Length | 8192 |
| Transformers Version | 4.28.1 |
| Tokenizer Class | GPTNeoXTokenizer |
| Vocabulary Size | 50432 |
| Torch Data Type | bfloat16 |
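The Required VRAM figure above can be sanity-checked from the table itself: a model stored in bfloat16 uses 2 bytes per parameter, so ~30 billion parameters come to roughly 60 GB of weights. A minimal back-of-envelope sketch (the 30e9 parameter count is an approximation; the actual MPT-30B count differs slightly, which is why the table shows 60.1 GB):

```python
def estimate_weight_vram_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough weight-memory footprint in GB: parameters x bytes per parameter.

    bfloat16 and float16 use 2 bytes per parameter; float32 would use 4.
    This counts weights only, not activations or the KV cache.
    """
    return num_params * bytes_per_param / 1e9

# ~30 billion parameters in bfloat16 (approximate figure)
print(estimate_weight_vram_gb(30e9))  # -> 60.0
```

Actual inference requires additional headroom beyond this figure for activations and the key/value cache, which grows with the 8192-token context length.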
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Mpt 30B Chat | 0.3 | 0K / 60.1 GB | 2079 | 199 |
| Mpt 30B | 0.2 | 0K / 60.1 GB | 16672 | 341 |
| Mpt 30B Instruct | 0.2 | 0K / 60.1 GB | 6047 | 100 |
| Mpt 30B Orca Mini | 0.1 | 0K / 180.5 GB | 9 | 1 |
| Mpt 30B V2 | 0.1 | 0K / 60.1 GB | 11 | 10 |
| Mpt 30B V3 | 0.1 | 0K / 60.1 GB | 10 | 2 |
| ...s Mpt 30B Gpt4 1p4 Five Epochs | 0.1 | 0K / 60.1 GB | 18 | 7 |
| Mpt 30B Qlora Multi GPU | 0.1 | 0K / GB | 13 | 1 |
| ...t 30B Instruct Peft Compatible | 0.1 | 0K / 60.1 GB | 17 | 2 |
| Mpt 30B Qlora Compatible | 0.1 | 0K / 60.1 GB | 10 | 11 |