| Field | Value |
|---|---|
| LLM Name | Dolphin 2.6 Mistral 7B DPO |
| Repository | https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo |
| Model Size | 7b |
| Required VRAM | 14.4 GB |
| Updated | 2025-02-22 |
| Maintainer | cognitivecomputations |
| Model Type | mistral |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | en |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.36.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <\|im_end\|> |
| Vocabulary Size | 32001 |
| Torch Data Type | bfloat16 |
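The spec table maps directly onto a standard Transformers loading call. Below is a minimal sketch: the repository ID, `bfloat16` dtype, and context length come from the table above, while the ChatML-style prompt is an assumption suggested by the `<|im_end|>` padding token, not something the table itself confirms.

```python
# Minimal loading sketch based on the spec table above.
# Assumption: the model expects ChatML-style prompts (inferred from <|im_end|>).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"

tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to LlamaTokenizer per the table
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # Torch Data Type from the spec table (~14.4 GB VRAM)
    device_map="auto",           # requires the accelerate package
)

# ChatML-style prompt (assumed format)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain DPO in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```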
Quantized versions:

| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...olphin 2.6 Mistral 7B DPO GGUF | 21 | 469 | 3 GB |
| Dolphin 2.6 Mistral 7B DPO AWQ | 2 | 74 | 4 GB |
| ...olphin 2.6 Mistral 7B DPO GPTQ | 9 | 40 | 4 GB |
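The GGUF quantization listed above runs on CPU (or partially offloaded to GPU) via llama-cpp-python. A sketch, assuming a small quant file from that repository; the exact `.gguf` file name below is a placeholder, since the table truncates the repository names:

```python
# Hypothetical sketch: running the GGUF quantization with llama-cpp-python.
# The model_path is a placeholder; pick an actual .gguf file from the
# quantized repo (the ~3 GB figure above corresponds to a small quant).
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.6-mistral-7b-dpo.Q2_K.gguf",  # placeholder file name
    n_ctx=32768,  # the model's full context length; reduce to save RAM
)

print(llm("Explain DPO in one sentence.", max_tokens=64)["choices"][0]["text"])
```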
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 4620 | 11 |
| SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 12 | 1 |
| SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1 |
| ...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0 |
| ... Summarize 64K QLoRANET Merged | 128K / 4.1 GB | 12 | 0 |
| ...1 Summarize 64K LoRANET Merged | 128K / 14.4 GB | 11 | 0 |
| Mistral 7B Instruct V0.2 | 32K / 14.4 GB | 3316095 | 2630 |
| Mistral 7B Instruct V0.1 | 32K / 14.4 GB | 191630 | 1573 |
| ...ity Instruct 7M Gen Mistral 7B | 32K / 14.4 GB | 3750 | 5 |
| ...ty Instruct 3M 0625 Mistral 7B | 32K / 14.4 GB | 3706 | 3 |