Hermes 2 Pro Mistral 7B by NousResearch


  Autotrain compatible Base model:mistralai/mistral-7...   Chatml   Conversational   Dataset:teknium/openhermes-2.5   Distillation   Dpo   En   Endpoints compatible   Finetuned   Function calling   Gpt4   Instruct   Json mode   License:apache-2.0   Mistral   Region:us   Rlhf   Safetensors   Sharded   Synthetic data   Tensorflow
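Among the tags above is ChatML, the turn-based prompt format this model is trained on. A minimal sketch of rendering a conversation as a ChatML prompt, assuming the standard `<|im_start|>`/`<|im_end|>` delimiters (the helper name `to_chatml` is illustrative, not from the model card):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice the same result can be obtained from the tokenizer's built-in chat template, but the explicit version shows what the model actually sees.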

Rank the Hermes 2 Pro Mistral 7B Capabilities


Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Hermes 2 Pro Mistral 7B (NousResearch/Hermes-2-Pro-Mistral-7B)

Quantized Models of the Hermes 2 Pro Mistral 7B

Model                             | Likes | Downloads | VRAM
...Mistral 7B Metropole 4bit Gguf | 0     | 26        | 4 GB
Hermes 2 Pro Mistral 7B AWQ       | 0     | 9         | 4 GB
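The VRAM figures in the table above can be roughly derived from the parameter count and the quantization bit width (weights only, ignoring KV cache and runtime overhead). A back-of-the-envelope sketch, assuming ~7.2 B parameters for a Mistral-7B-class model:

```python
def weight_gb(n_params, bits):
    """Approximate weight memory in GB for a given per-weight bit width."""
    return n_params * bits / 8 / 1e9

n = 7.2e9  # approximate parameter count of a Mistral-7B-class model
print(f"bf16:  {weight_gb(n, 16):.1f} GB")  # ~14.4 GB, matching the full model
print(f"4-bit: {weight_gb(n, 4):.1f} GB")   # ~3.6 GB, close to the ~4 GB quants
```

Real quantized files run slightly larger because some tensors (e.g. embeddings) are often kept at higher precision.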

Best Alternatives to Hermes 2 Pro Mistral 7B

Best Alternatives                 | HF Rank | Context / RAM | Downloads | Likes
KAI 7B V0.1                       | 74.45   | 32K / 14.4 GB | 34        | 9
Dolphin 2.2.1 Mistral 7B          | 73.17   | 32K / 14.4 GB | 13484     | 185
...own Clown 7B Tak Stack DPO AWQ | 67      | 32K / 4.2 GB  | 5         | 0
NeuralMonarch 7B AWQ              | 66.9    | 32K / 4.2 GB  | 5         | 0
AlphaHitchhiker 7B AWQ            | 66.6    | 32K / 4.2 GB  | 5         | 0
...ake 7B V2 Laser Truthy DPO AWQ | 66.1    | 32K / 4.2 GB  | 5         | 0
Mistral 7B Instruct V0.2          | 65.71   | 32K / 14.4 GB | 2095986   | 2207
...andle Dolphin 2.2.1 Mistral 7B | 64.2    | 32K / 14.4 GB | 249       | 0
Dolphin 2.8 Slerp AWQ             | 63.4    | 32K / 4.2 GB  | 5         | 0
Newton 7B 5.0bpw H6 EXL2          | 60.8    | 8K / 4.7 GB   | 6         | 0
Note: a green score (e.g. "73.2") means that the model scores higher than NousResearch/Hermes-2-Pro-Mistral-7B.

Hermes 2 Pro Mistral 7B Parameters and Internals

LLM Name: Hermes 2 Pro Mistral 7B
Repository: Open on 🤗
Base Model(s): mistralai/Mistral-7B-v0.1
Model Size: 7b
Required VRAM: 14.5 GB
Updated: 2024-05-14
Maintainer: NousResearch
Model Type: mistral
Model Files: 4.0 GB (1-of-4), 3.9 GB (2-of-4), 3.9 GB (3-of-4), 2.7 GB (4-of-4)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32032
Initializer Range: 0.02
Torch Data Type: bfloat16
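The function-calling and JSON-mode tags reflect Hermes 2 Pro's tool-use training, which embeds JSON tool schemas in the system prompt and has the model emit calls inside `<tool_call>` tags. A hedged sketch of building such a system prompt; the tag convention follows the Hermes 2 Pro model card, but the exact wording and the `get_weather` schema below are illustrative assumptions:

```python
import json

def tool_system_prompt(tools):
    """Embed JSON tool schemas in a Hermes-2-Pro-style system prompt."""
    schemas = "\n".join(json.dumps(t) for t in tools)
    return (
        "You are a function-calling AI. You may call one or more functions.\n"
        f"<tools>\n{schemas}\n</tools>\n"
        "For each call, return a JSON object inside <tool_call></tool_call> tags."
    )

get_weather = {  # hypothetical example schema, not part of the model card
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
print(tool_system_prompt([get_weather]))
```

The model's reply can then be scanned for `<tool_call>` spans and each span parsed with `json.loads` before dispatching to the real function.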


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801