| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Quantum DPO V0.1 GGUF | 1 | 223 | 3 GB |
| Quantum DPO V0.1 AWQ | 0 | 6 | 4 GB |
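
The GGUF variant can be run locally with llama-cpp-python. The snippet below is a minimal sketch only: the file name is a placeholder for whichever quantized `.gguf` file you actually download from the GGUF repository.

```python
# Minimal sketch: run a downloaded GGUF quantization with llama-cpp-python.
# The model_path is a hypothetical placeholder -- point it at your downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="./quantum-dpo-v0.1.Q4_K_M.gguf",  # placeholder file name
    n_ctx=32768,        # matches the model's 32K context length
    n_gpu_layers=-1,    # offload all layers to the GPU if VRAM allows
)

out = llm("Explain what DPO fine-tuning is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```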
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Openchat 3.5 0106 128K DPO | — | 128K / 14.4 GB | 718 | 2 |
| Mixtral AI Cyber 3.0 | — | 32K / 14.3 GB | 739 | 0 |
| Em German Leo Mistral | — | 32K / 14.4 GB | 3123 | 63 |
| Multi Verse Model | — | 32K / 14.4 GB | 5957 | 34 |
| Openchat 3.5 16K | — | 32K / 14.4 GB | 1371 | 31 |
| Go Bruins V2 | — | 32K / 14.4 GB | 1244 | 30 |
| Go Bruins V2.1.1 | — | 32K / 14.4 GB | 1363 | 22 |
| Em German Mistral V01 | — | 32K / 14.4 GB | 93 | 18 |
| Phoenix | — | 32K / 14.4 GB | 18 | 16 |
| Navarna V0 1 OpenHermes Hindi | — | 32K / 14.4 GB | 11 | 16 |
| LLM Name | Quantum DPO V0.1 |
|---|---|
| Repository | Open on 🤗 Hugging Face |
| Model Size | 7.2b |
| Required VRAM | 14.4 GB |
| Updated | 2024-07-04 |
| Maintainer | quantumaikr |
| Model Type | mistral |
| Model Files | |
| Supported Languages | en |
| Model Architecture | MistralForCausalLM |
| License | cc-by-nc-4.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.36.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | float16 |
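
The configuration above (MistralForCausalLM, float16 weights, 32768-token context, LlamaTokenizer) maps onto the standard transformers loading path. The sketch below assumes a repository id of `quantumaikr/quantum-dpo-v0.1`, which is a guess based on the maintainer and model name; substitute the actual Hugging Face repository linked above.

```python
# Minimal sketch, assuming the hypothetical repo id "quantumaikr/quantum-dpo-v0.1".
# Requires transformers >= 4.36 (and accelerate for device_map="auto");
# the float16 weights need roughly 14.4 GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "quantumaikr/quantum-dpo-v0.1"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```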