| LLM Name | Leo Hessianai 7B Chat |
|---|---|
| Repository 🤗 | https://huggingface.co/LeoLM/leo-hessianai-7b-chat |
| Model Size | 7b |
| Required VRAM | 13.5 GB |
| Updated | 2024-09-18 |
| Maintainer | LeoLM |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | en, de |
| Model Architecture | LlamaForCausalLM |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.33.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <PAD> |
| Vocabulary Size | 32128 |
| Torch Data Type | float16 |
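For reference, a minimal sketch of loading this checkpoint with 🤗 Transformers. The repository ID, `float16` dtype, and 8192-token context come from the table above; the prompt text and generation settings are illustrative assumptions, so check the model card for the exact chat template.

```python
# Minimal sketch, not an official example: loading the float16 checkpoint
# listed above (~13.5 GB VRAM) with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeoLM/leo-hessianai-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",          # place layers on the available GPU(s)
)

# Prompt format is an assumption; see the model card for the exact template.
prompt = "Erkläre kurz den Unterschied zwischen RAM und VRAM."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```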
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Leo Hessianai 7B Chat GGUF | 4 | 188 | 2 GB |
| Leo Hessianai 7B Chat GPTQ | 0 | 13 | 3 GB |
| Leo Hessianai 7B Chat AWQ | 1 | 5 | 3 GB |
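The GGUF build above fits in roughly 2 GB and can run on CPU or a small GPU through llama.cpp bindings. A minimal sketch with `llama-cpp-python`, assuming the quantized file has already been downloaded locally (the filename below is hypothetical):

```python
# Minimal sketch, assuming a locally downloaded GGUF file (hypothetical filename).
from llama_cpp import Llama

llm = Llama(
    model_path="./leo-hessianai-7b-chat.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,        # matches the listed context length
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Fasse die Vorteile von Quantisierung zusammen."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```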
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llama 2 7B 32K Instruct | 32K / 13.5 GB | 10543 | 160 |
| Qwen2 7B Instruct Llama | 32K / 15.2 GB | 4 | 3 |
| Qwen2 7B Instruct Mistral | 32K / 15.2 GB | 3 | 1 |
| AIRIC The Mistral | 32K / 14.4 GB | 681 | 7 |
| ...ls Vikhr 7B Instruct 0.4 4bits | 32K / 5.2 GB | 5 | 0 |
| Vikhr 7B Instruct 0.3 | 32K / 14.6 GB | 20 | 4 |
| ... Samantha Philosopher 7B Slerp | 32K / 13.4 GB | 4 | 2 |
| Vikhr 7B Instruct Merged | 32K / 14.6 GB | 4 | 4 |
| Clinical Ehr Prototype 0.1 | 32K / 14.4 GB | 14 | 2 |
| ...stral 7B Instruct V0.1 Sharded | 32K / 14.4 GB | 1300 | 13 |