Property | Value |
---|---|
LLM Name | Proximus 2x7B V1 |
Repository | Open on 🤗 Hugging Face |
Model Size | 12.9B |
Required VRAM | 25.8 GB |
Updated | 2024-07-27 |
Maintainer | preemware |
Model Type | mixtral |
Model Files | |
Model Architecture | MixtralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.36.2 |
Tokenizer Class | LlamaTokenizer |
Padding Token | `<s>` |
Vocabulary Size | 32002 |
Torch Data Type | float16 |
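
Given the specs above (MixtralForCausalLM architecture, LlamaTokenizer, 32768-token context, float16 weights, roughly 25.8 GB of VRAM), the model should load through the standard Hugging Face transformers API. The following is a minimal loading sketch, not an official snippet from the card; the repository id `preemware/Proximus-2x7B-v1` is an assumption inferred from the maintainer and model name and should be verified on the repository page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id inferred from maintainer + model name; confirm on Hugging Face.
repo_id = "preemware/Proximus-2x7B-v1"

# AutoTokenizer resolves the LlamaTokenizer class listed in the card from the repo config.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# float16 weights as listed above; device_map="auto" spreads the ~25.8 GB across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain mixture-of-experts routing in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
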
Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
---|---|---|---|---|
MixTAO 7Bx2 MoE V8.1 | 0.3 | 32K / 25.8 GB | 7772 | 51 |
T3Q MSlerp 7Bx2 | 0.3 | 32K / 51.8 GB | 296 | 0 |
MergedExpert 2x8b | 0.3 | 32K / 25.8 GB | 266 | 0 |
RAG KO Mixtral 7Bx2 V2.1 | 0.3 | 32K / 25.8 GB | 19252 | 8 |
... TomGrc FusionNet 7Bx2 MoE 13B | 0.3 | 32K / 25.8 GB | 7020 | 53 |
DARE TIES 13B | 0.3 | 32K / 25.7 GB | 7030 | 10 |
MultiMash12 13B Slerp | 0.3 | 32K / 25.7 GB | 255 | 0 |
MultiMash11 13B Slerp | 0.3 | 32K / 25.7 GB | 244 | 0 |
MultiMash10 13B Slerp | 0.3 | 32K / 25.7 GB | 233 | 0 |
MultiMash9 13B Slerp | 0.3 | 32K / 25.7 GB | 239 | 0 |