Additional Notes | This model is a merge of various pre-trained language models using the mergekit tool and the Model Stock merge method. |
LLM Name | Hathor V4 |
Repository 🤗 | https://huggingface.co/MrRobotoAI/Hathor-v4 |
Base Model(s) | |
Merged Model | Yes |
Model Size | 8b |
Required VRAM | 14.5 GB |
Updated | 2024-11-11 |
Maintainer | MrRobotoAI |
Model Type | mistral |
Instruction-Based | Yes |
Model Files | |
Model Architecture | MistralForCausalLM |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.44.1 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 32000 |
LoRA Model | Yes |
Torch Data Type | float16 |
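The specification above lists a float16 checkpoint with a 32K context window. The sketch below shows one plausible way to load and query it with the Hugging Face `transformers` library; the model ID comes from the repository link above, while the prompt, token budget, and `device_map="auto"` placement are illustrative assumptions (they require `torch` and `accelerate` to be installed and roughly the 14.5 GB of VRAM noted above).

```python
# Minimal loading sketch for the merged checkpoint listed above.
# Assumes: transformers, torch, and accelerate are installed; GPU with ~15 GB VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MrRobotoAI/Hathor-v4"  # repository listed in the spec

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 torch data type above
    device_map="auto",          # assumption: let accelerate place the weights
)

# Illustrative prompt; the card marks the model as instruction-based.
prompt = "Summarize the Model Stock merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```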
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Ministrations 8B V1 | 32K / 16.1 GB | 77 | 12 |
Ministral 8B Instruct 2410 HF | 32K / 32 GB | 974 | 10 |
...ect Ministral8Bit Mg DPO Psdp2 | 32K / 16.1 GB | 61 | 0 |
...ruct 2410 MetaMathQA DPO Iter1 | 32K / 16.1 GB | 364 | 0 |
...t Ministral8Bit MMQA DPO Iter1 | 32K / 60.8 GB | 88 | 0 |
...t Ministral8Bit Math DPO Iter1 | 32K / 16.1 GB | 52 | 0 |
...t Ministral8Bit MMQA Mix Iter2 | 32K / 16.1 GB | 131 | 0 |
...t Ministral8Bit MMQA DPO Iter1 | 32K / 16.1 GB | 35 | 0 |
...ruct 2410 MetaMathQA DPO Iter2 | 32K / 16.1 GB | 77 | 0 |
...10 MetaMathQA DPO Iter5 Lr2e 7 | 32K / 16.1 GB | 43 | 0 |