| Property | Value |
|---|---|
| LLM Name | Merak 7B V4 |
| Repository 🤗 | https://huggingface.co/Ichsan2895/Merak-7B-v4 |
| Model Size | 7b |
| Required VRAM | 28.9 GB |
| Updated | 2025-02-22 |
| Maintainer | Ichsan2895 |
| Model Type | mistral |
| Model Files | |
| Supported Languages | id, en |
| Model Architecture | MistralForCausalLM |
| License | cc-by-nc-sa-4.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.34.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 32002 |
| Torch Data Type | float32 |
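Based on the metadata above, the model can be loaded through the Hugging Face Transformers library. The sketch below is a minimal example, assuming the standard `AutoModelForCausalLM`/`AutoTokenizer` workflow, the `accelerate` package for `device_map="auto"`, and enough memory for the full float32 checkpoint (about 28.9 GB); the prompt and generation settings are illustrative only and are not taken from the model card.

```python
# Minimal loading sketch for Merak-7B-v4, based on the table above:
# MistralForCausalLM architecture, LlamaTokenizer, float32 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ichsan2895/Merak-7B-v4"

tokenizer = AutoTokenizer.from_pretrained(model_id)   # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(         # resolves to MistralForCausalLM
    model_id,
    torch_dtype=torch.float32,                        # matches "Torch Data Type: float32" (~28.9 GB)
    device_map="auto",
)

# Illustrative prompt; the model card lists Indonesian (id) and English (en) support.
prompt = "Apa ibu kota Indonesia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A lower-precision dtype (e.g. `torch.float16`) or one of the quantized variants listed below would reduce the memory requirement well under the 28.9 GB needed for full float32 weights.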
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| USK Mistral 7B Unsloth GGUF | 0 | 28 | 4 GB |
| Merak 7B V4 GPTQ | 2 | 11 | 4 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 4620 | 11 |
| MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 5681 | 50 |
| SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 12 | 1 |
| SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1 |
| ...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0 |
| MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 5633 | 16 |
| Hebrew Mistral 7B 200K | 256K / 30 GB | 14619 | 15 |
| Astral 256K 7B V2 | 250K / 14.4 GB | 7 | 0 |
| Astral 256K 7B | 250K / 14.4 GB | 5 | 0 |
| Test001 | 128K / 14.5 GB | 9 | 0 |