Additional Notes | This model produces long responses with a lower hallucination rate, and its training data was not subject to OpenAI censorship. Available in GGML and GGUF formats, with GPU-acceleration options for Llama 70B.
LLM Name | Nous Hermes Llama2 70B GGML |
Repository 🤗 | https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGML
Model Name | Nous Hermes Llama2 70B |
Model Creator | NousResearch |
Base Model(s) | |
Model Size | 70b |
Required VRAM | 29 GB |
Updated | 2024-11-12 |
Maintainer | TheBloke |
Model Type | llama |
Model Files | |
Supported Languages | en |
GGML Quantization | Yes |
Quantization Type | ggml |
Model Architecture | AutoModel |
License | llama2 |
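The listing gives 29 GB as the VRAM needed to offload the whole model to the GPU; with a smaller card, llama.cpp-style runtimes let you offload only some of the 80 transformer layers in Llama 2 70B. A minimal sketch of sizing that offload, assuming a uniform per-layer cost derived from the 29 GB figure (a simplification; real layer and KV-cache sizes vary):

```python
# Sketch: estimate how many of a 70B model's layers fit in a given VRAM
# budget. Per-layer cost is assumed uniform, derived from the listing's
# 29 GB full-offload figure spread across 80 layers.

TOTAL_LAYERS = 80     # Llama 2 70B has 80 transformer layers
FULL_MODEL_GB = 29.0  # "Required VRAM" from the listing above


def layers_for_budget(vram_gb: float,
                      total_layers: int = TOTAL_LAYERS,
                      model_gb: float = FULL_MODEL_GB) -> int:
    """Rough number of layers that fit in `vram_gb` of GPU memory."""
    per_layer_gb = model_gb / total_layers
    return max(0, min(total_layers, int(vram_gb / per_layer_gb)))


# Example: a 24 GB card fits roughly 66 of the 80 layers.
print(layers_for_budget(24.0))
```

The result would be passed as `n_gpu_layers` to llama-cpp-python's `Llama(...)` constructor, or as `-ngl` on the llama.cpp command line; remaining layers run on the CPU.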
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Llama 2 70B Chat GGML | 0K / 28.6 GB | 85 | 161 |
Llama 2 70B GGML | 0K / 28.6 GB | 122 | 74 |
Synthia 70B V1.1 GGML | 0K / 28.6 GB | 32 | 4 |
...iction.live Kimiko V2 70B GGML | 0K / 28.6 GB | 22 | 2 |
Lemur 70B Chat V1 GGML | 0K / 29 GB | 26 | 3 |
...boros L2 70B 2.1 Creative GGML | 0K / 28.6 GB | 2 | 3 |
Model 007 70B GGML | 0K / 28.6 GB | 12 | 1 |
Llama2 70B OASST SFT V10 GGML | 0K / 29 GB | 29 | 4 |
Llama 2 70B Orca 200K GGML | 0K / 28.6 GB | 24 | 3 |
Synthia 70B GGML | 0K / 28.6 GB | 28 | 2 |