LLM Name | Lorge 2x7B UAMM |
Repository 🤗 | https://huggingface.co/Alsebay/Lorge-2x7B-UAMM
Merged Model | Yes |
Model Size | 19.2B
Required VRAM | 38.2 GB |
Updated | 2024-12-03 |
Maintainer | Alsebay |
Model Type | mixtral |
Model Files | |
Model Architecture | MixtralForCausalLM |
License | cc-by-nc-4.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.40.0 |
Tokenizer Class | LlamaTokenizer |
Padding Token | <s> |
Vocabulary Size | 32000 |
Torch Data Type | bfloat16 |
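
The details above are enough to load the model with Hugging Face Transformers. Below is a minimal loading sketch, not taken from the model card: it assumes transformers ≥ 4.40.0, the `accelerate` package for `device_map="auto"`, and roughly 38 GB of GPU memory for the bfloat16 weights.

```python
# Illustrative sketch only: loading Alsebay/Lorge-2x7B-UAMM with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/Lorge-2x7B-UAMM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # requires `accelerate`; spreads layers across GPUs
)

prompt = "Explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Prompt and generation settings are placeholders; the model supports up to the 32768-token context length listed above.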
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
MixTAO 19B Pass | 32K / 38.1 GB | 16 | 1 |
Multimerge 19B Pass | 32K / 38 GB | 10 | 0 |
Mistralmath 15B Pass | 32K / 38.5 GB | 11 | 0 |
TaoPassthrough 15B S | 32K / 38.4 GB | 20 | 0 |
Raccoon Small | 32K / 38.4 GB | 86 | 1 |
...oundary Solar Chat 2x10.7B MoE | 4K / 38 GB | 123 | 1 |
Mixtral 11Bx2 MoE 19B | 4K / 38.4 GB | 1275 | 38 |
Truthful DPO MoE 19B | 4K / 38.4 GB | 1207 | 1 |
Venus DPO 50 | 4K / 38.4 GB | 1210 | 0 |
SOLAR Math 2x10.7B V0.2 | 4K / 38.4 GB | 1215 | 3 |