| LLM Name | LuminRP 13B 128K |
|---|---|
| Repository | Open on 🤗 |
| Model Size | 13B |
| Required VRAM | 25.8 GB |
| Updated | 2024-07-27 |
| Maintainer | Ppoyaa |
| Model Type | mixtral |
| Model Files | |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.40.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
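At bfloat16 (2 bytes per parameter), a ~13B-parameter model works out to roughly 26 GB of weights, which matches the listed 25.8 GB VRAM requirement. Below is a minimal loading sketch using the `transformers` library; the repository id is an assumption pieced together from the maintainer and model name above, so verify it on Hugging Face before use.

```python
# Minimal sketch for loading this model with transformers (>= 4.40.2).
# NOTE: the repo id below is an assumption inferred from the maintainer
# and model name listed above -- verify it on Hugging Face before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Ppoyaa/LuminRP-13B-128K"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer per the spec above
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # needs roughly 25.8 GB of VRAM in bf16
)

prompt = "Hello"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```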
| Best Alternatives | HF Rank | Context / VRAM | Downloads | Likes |
|---|---|---|---|---|
| T3Q MSlerp 13B | 0.3 | 32K / 51.8 GB | 292 | 0 |
| Yunconglong 13B Slerp | 0.3 | 32K / 25.7 GB | 237 | 0 |
| Eclipse 13B DPO | 0.2 | 32K / 25.8 GB | 293 | 0 |
| 13B MATH DPO | 0.2 | 32K / 25.8 GB | 301 | 1 |
| ...et 7Bx2 MoE 13B 3.0bpw H6 EXL2 | 0.2 | 32K / 5.1 GB | 21 | 0 |
| ...et 7Bx2 MoE 13B 6.0bpw H6 EXL2 | 0.2 | 32K / 9.8 GB | 6 | 3 |
| ...et 7Bx2 MoE 13B 4.0bpw H6 EXL2 | 0.2 | 32K / 6.7 GB | 7 | 1 |
| WordWoven 13B GPTQ | 0.2 | 32K / 7.1 GB | 6 | 3 |
| WordWoven 13B AWQ | 0.2 | 32K / 7.1 GB | 7 | 2 |
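Several of the alternatives above are quantized builds (EXL2, GPTQ, AWQ), which is why they fit in roughly 5-10 GB instead of ~26 GB. As a rough sketch, an AWQ checkpoint can be loaded through the same `transformers` API when the `autoawq` package is installed; the repo id below is hypothetical, so substitute the actual WordWoven 13B AWQ repository from the table.

```python
# Hedged sketch: loading a quantized (AWQ) alternative via transformers.
# Requires the `autoawq` package; the repo id is hypothetical -- substitute
# the actual WordWoven 13B AWQ repository from the table above.
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_repo = "someuser/WordWoven-13B-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(awq_repo)
# transformers reads the quantization config stored in the checkpoint and
# runs the AWQ kernels, so VRAM use stays near the listed file size (~7.1 GB).
model = AutoModelForCausalLM.from_pretrained(awq_repo, device_map="auto")
```

Note that the EXL2 variants are not loadable this way; they require the exllamav2 runtime rather than `transformers`.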