| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...Mistral 7B Metropole 4bit Gguf | 0 | 26 | 4 GB |
| Hermes 2 Pro Mistral 7B AWQ | 0 | 9 | 4 GB |
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| KAI 7B V0.1 | 74.45 | 32K / 14.4 GB | 34 | 9 |
| Dolphin 2.2.1 Mistral 7B | 73.17 | 32K / 14.4 GB | 13484 | 185 |
| ...own Clown 7B Tak Stack DPO AWQ | 67 | 32K / 4.2 GB | 5 | 0 |
| NeuralMonarch 7B AWQ | 66.9 | 32K / 4.2 GB | 5 | 0 |
| AlphaHitchhiker 7B AWQ | 66.6 | 32K / 4.2 GB | 5 | 0 |
| ...ake 7B V2 Laser Truthy DPO AWQ | 66.1 | 32K / 4.2 GB | 5 | 0 |
| Mistral 7B Instruct V0.2 | 65.71 | 32K / 14.4 GB | 2095986 | 2207 |
| ...andle Dolphin 2.2.1 Mistral 7B | 64.2 | 32K / 14.4 GB | 249 | 0 |
| Dolphin 2.8 Slerp AWQ | 63.4 | 32K / 4.2 GB | 5 | 0 |
| Newton 7B 5.0bpw H6 EXL2 | 60.8 | 8K / 4.7 GB | 6 | 0 |
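The RAM figures above follow directly from parameter count and weight precision: a 7B model at 16-bit precision needs roughly 14.5 GB for its weights, while a 4-bit AWQ quantization drops that to about 3.6 GB before overhead. The sketch below is an illustrative back-of-the-envelope estimate (the function name, the ~7.24 billion parameter count for Mistral-7B, and the decimal-GB convention are my assumptions, not from this table):

```python
# Rough rule of thumb: weight memory ≈ parameter_count × bytes_per_weight.
# Activation memory and the KV cache are ignored here, which is why real
# VRAM usage runs somewhat higher than these numbers.

def estimate_weight_gb(num_params: float, bits_per_weight: float) -> float:
    """Estimate the size of model weights in GB (10^9 bytes)."""
    return num_params * bits_per_weight / 8 / 1e9

# Mistral-7B has ~7.24 billion parameters (assumed figure).
full = estimate_weight_gb(7.24e9, 16)  # bf16/fp16: ~14.5 GB
awq4 = estimate_weight_gb(7.24e9, 4)   # 4-bit: ~3.6 GB before overhead
```

The listed AWQ variants show 4.2 GB rather than 3.6 GB because quantized checkpoints typically keep some tensors (e.g. embeddings) at higher precision and carry per-group scale metadata.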
| LLM Name | Hermes 2 Pro Mistral 7B |
|---|---|
| Repository | Open on 🤗 |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 14.5 GB |
| Updated | 2024-05-14 |
| Maintainer | NousResearch |
| Model Type | mistral |
| Model Files | |
| Supported Languages | en |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 32032 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
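The card lists a 32768-token context window (Context Length / Model Max Length). A practical question when using any such model is whether a prompt plus the planned generation fits that window. Below is a minimal sketch of that check; the 4-characters-per-token heuristic, the function names, and the 512-token generation budget are illustrative assumptions, not part of the model card (real code should count tokens with the model's own tokenizer):

```python
CONTEXT_LENGTH = 32768  # from the model card above

def rough_token_count(text: str) -> int:
    """Very rough heuristic: English text averages ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_new_tokens: int = 512) -> bool:
    """Check that the prompt plus planned generation stays inside the window."""
    return rough_token_count(prompt) + max_new_tokens <= CONTEXT_LENGTH
```

For an exact count, load the model's tokenizer (class `LlamaTokenizer` per the card) and measure `len(tokenizer(prompt).input_ids)` instead of the heuristic.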