Hermes 2 Pro Llama 3 8B by NousResearch


Tags: autotrain compatible, axolotl, base model: nousresearch/meta-l..., chatml, conversational, dataset: teknium/openhermes-2.5, distillation, dpo, en, endpoints compatible, finetuned, function calling, gpt4, instruct, json mode, llama, llama-3, region: us, rlhf, safetensors, sharded, synthetic data, tensorflow

Rank the Hermes 2 Pro Llama 3 8B Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Hermes 2 Pro Llama 3 8B (NousResearch/Hermes-2-Pro-Llama-3-8B)

Quantized Models of the Hermes 2 Pro Llama 3 8B

Hermes 2 Pro Llama 3 8B GGUF | 116903 GB
...s Hermes 2 Pro Llama 3 8B GGUF | 1774 GB
...Llama3 Entity Mapping Gguf F16 | 021916 GB
Llama 3 8B AWQ 4bit Smashed | 0305 GB
Llama 3 8B Bnb 4bit Smashed | 0146 GB
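The quantized variants above (F16, AWQ 4-bit, bnb 4-bit) differ mainly in bits per weight. A quick back-of-the-envelope sketch (ignoring embedding and metadata overhead, which adds a little on top) shows why the full-precision model needs roughly the 16.1 GB of VRAM listed below, while 4-bit versions need about a quarter of that:

```python
def est_size_gb(n_params_billions: float, bits_per_weight: int) -> float:
    # size in GB ~= parameters (in billions) * bits per weight / 8 bits per byte
    return n_params_billions * bits_per_weight / 8

# 8B parameters at float16 (16 bits per weight) -> 16.0 GB,
# close to the 16.1 GB "Required VRAM" this page lists
print(est_size_gb(8, 16))

# 4-bit quantization (AWQ 4bit / bnb 4bit) -> roughly a quarter of that
print(est_size_gb(8, 4))
```

Real checkpoints come in slightly above these figures because of embeddings, norms, and file metadata.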

Best Alternatives to Hermes 2 Pro Llama 3 8B

Best Alternatives | Context / VRAM | HF Rank
BigBoiV14 V2 | 1024K / 16 GB | 60
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 31322564
Buzz 8B Large V0.5 | 1024K / 16.1 GB | 155726
Dolphin 2.9 Llama3 8B 1M | 1024K / 16.1 GB | 21724
Llama 3 8B 1M PoSE | 1024K / 16.1 GB | 17735
Meta Llama 3 8B 1M | 1024K / 16.1 GB | 3213
Llama 3 8B Instruct V41 1048K | 1024K / 16.1 GB | 33
Llama 3 8B Instruct 1048K | 1024K / 16.1 GB | 33
Pyg Llama 8B 1M 0.25 | 1024K / 16.1 GB | 352
Meta Llama 3 8B 1M V2 | 1024K / 16.1 GB | 701

Hermes 2 Pro Llama 3 8B Parameters and Internals

LLM Name: Hermes 2 Pro Llama 3 8B
Repository: Open on 🤗
Base Model(s): Meta Llama 3 8B (NousResearch/Meta-Llama-3-8B)
Model Size: 8b
Required VRAM: 16.1 GB
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128288
Initializer Range: 0.02
Torch Data Type: float16
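The tag list marks this model as using the ChatML conversation format. As a minimal sketch, assuming the standard ChatML special tokens `<|im_start|>` and `<|im_end|>` (the conventional ChatML delimiters, not something this page spells out), a prompt for the model can be assembled like this:

```python
def chatml_prompt(messages):
    # Render a list of {"role": ..., "content": ...} dicts in ChatML style,
    # then open an assistant turn for the model to complete.
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from the transformers version listed above produces this formatting directly from the model's bundled chat template.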

What open-source LLMs or SLMs are you looking for? 35549 listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801