| Property | Value |
|---|---|
| LLM Name | Eclipse 13B DPO |
| Repository 🤗 | https://huggingface.co/Xenon1/Eclipse-13B-dpo |
| Model Size | 13B |
| Required VRAM | 25.8 GB |
| Updated | 2025-02-22 |
| Maintainer | Xenon1 |
| Model Type | mixtral |
| Supported Languages | en |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
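
The configuration above (MixtralForCausalLM architecture, LlamaTokenizer, float16 weights, 32768-token context) maps onto the standard `transformers` auto-class loading flow. The sketch below is a minimal, untested example assuming that flow; the prompt text and generation settings are illustrative only. Note that the 25.8 GB VRAM figure is consistent with float16 storage: roughly 12.9 B parameters × 2 bytes ≈ 25.8 GB.

```python
# Minimal loading sketch for Eclipse-13B-dpo. Untested against this exact
# checkpoint; assumes the standard transformers auto-class flow implied by
# the MixtralForCausalLM architecture and LlamaTokenizer listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xenon1/Eclipse-13B-dpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 weights (~25.8 GB)
    device_map="auto",          # spread layers across available GPUs
)

# Illustrative prompt; adjust generation settings to taste.
inputs = tokenizer("Explain mixture-of-experts routing in one paragraph.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```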
| Best Alternatives | Context / Required VRAM | Downloads | Likes |
|---|---|---|---|
| LuminRP 13B 128K | 128K / 25.8 GB | 13 | 2 |
| Yunconglong 13B Slerp | 32K / 25.7 GB | 13 | 0 |
| T3Q MSlerp 13B | 32K / 51.8 GB | 20 | 0 |
| 13B MATH DPO | 32K / 25.8 GB | 34 | 1 |
| ...et 7Bx2 MoE 13B 6.0bpw H6 EXL2 | 32K / 9.8 GB | 10 | 3 |
| ...et 7Bx2 MoE 13B 4.0bpw H6 EXL2 | 32K / 6.7 GB | 7 | 1 |
| ...et 7Bx2 MoE 13B 3.0bpw H6 EXL2 | 32K / 5.1 GB | 8 | 0 |
| WordWoven 13B AWQ | 32K / 7.1 GB | 64 | 2 |
| WordWoven 13B GPTQ | 32K / 7.1 GB | 9 | 3 |
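
As a rough sanity check on the quantized EXL2 alternatives, file size scales with bits per weight (bpw). The snippet below estimates sizes assuming the ~12.9 B parameter count implied by the 25.8 GB float16 weights; estimates slightly undershoot the listed sizes because some layers and metadata are not quantized.

```python
# Back-of-the-envelope size check for the quantized alternatives above.
# Assumes ~12.9e9 parameters, as implied by the 25.8 GB float16 weights.
params = 25.8e9 / 2  # float16 = 2 bytes per parameter

for bpw, listed_gb in [(6.0, 9.8), (4.0, 6.7), (3.0, 5.1)]:
    est_gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{bpw} bpw: ~{est_gb:.1f} GB estimated vs {listed_gb} GB listed")
```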