| LLM Name | Self Reflect Ministral8Bit Mg DPO Psdp2 |
|---|---|
| Repository 🤗 | https://huggingface.co/RyanYr/self-reflect_ministral8Bit_mg_dpo_psdp2 |
| Model Name | self-reflect_ministral8Bit_mg_dpo_psdp2 |
| Base Model(s) | |
| Model Size | 8B |
| Required VRAM | 16.1 GB |
| Updated | 2024-11-15 |
| Maintainer | RyanYr |
| Model Type | mistral |
| Instruction-Based | Yes |
| Model Files | |
| Model Architecture | MistralForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.45.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | [PAD] |
| Vocabulary Size | 131073 |
| Torch Data Type | bfloat16 |
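Since the listing describes a standard MistralForCausalLM checkpoint, it should load through the usual Hugging Face `transformers` API. Below is a minimal sketch, assuming the repository is publicly accessible, `transformers` >= 4.45.2 is installed, and a GPU with roughly 16.1 GB of free VRAM is available; the prompt string is purely illustrative.

```python
# Minimal loading sketch for the checkpoint listed above (assumptions noted in comments).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RyanYr/self-reflect_ministral8Bit_mg_dpo_psdp2"

# Tokenizer class is LlamaTokenizer with a [PAD] padding token, per the table above.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 weights (~16.1 GB VRAM per the listing); device_map="auto"
# places layers on available GPUs automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; the model is listed as instruction-based.
prompt = "Explain self-reflection in LLM training in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```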
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Ministral 8B Instruct 2410 HF | 32K / 32 GB | 51015 | 10 |
| Ministrations 8B V1 | 32K / 16.1 GB | 143 | 15 |
| Ministral 8B Slerp | 32K / 29.2 GB | 27 | 0 |
| ...flect Mini8Bit Om2 460k Sft T1 | 32K / 16.1 GB | 130 | 0 |
| ...ruct 2410 MetaMathQA DPO Iter1 | 32K / 16.1 GB | 387 | 0 |
| ...t Ministral8Bit MMQA Mix Iter2 | 32K / 16.1 GB | 132 | 0 |
| ...t Ministral8Bit MMQA DPO Iter1 | 32K / 60.8 GB | 89 | 0 |
| ...t Ministral8Bit Math DPO Iter1 | 32K / 16.1 GB | 53 | 0 |
| ...t Ministral8Bit MMQA DPO Iter1 | 32K / 16.1 GB | 42 | 0 |
| ...ruct 2410 MetaMathQA DPO Iter2 | 32K / 16.1 GB | 78 | 0 |