| Property | Value |
|---|---|
| LLM Name | Yi 34B Chat GGUF |
| Repository 🤗 | https://huggingface.co/second-state/Yi-34B-Chat-GGUF |
| Model Name | Yi 34B Chat |
| Model Creator | 01-ai |
| Base Model(s) | |
| Model Size | 34b |
| Required VRAM | 12.8 GB |
| Updated | 2024-12-22 |
| Maintainer | second-state |
| Model Type | llama |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf, q2, q4_k, q5_k |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.35.0 |
| Vocabulary Size | 64000 |
| Torch Data Type | bfloat16 |
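Since the table lists GGUF quantizations (q2, q4_k, q5_k) and a 4096-token context window, here is a minimal sketch of how such a file could be loaded locally with llama-cpp-python. The filename `Yi-34B-Chat-Q4_K_M.gguf` is an assumption for illustration; substitute whichever quantization you actually download from the repository.

```python
# Minimal sketch: running a GGUF quantization of Yi 34B Chat via llama-cpp-python.
# The model_path below is an assumed filename; use the .gguf file you downloaded
# from https://huggingface.co/second-state/Yi-34B-Chat-GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="Yi-34B-Chat-Q4_K_M.gguf",  # assumed local path to a downloaded quant
    n_ctx=4096,       # matches the 4096-token context length listed above
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; set 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Smaller quantizations (q2) trade answer quality for lower memory use, while q5_k stays closer to the bfloat16 original at a larger file size; the 12.8 GB VRAM figure above corresponds to one of the smaller quantizations.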
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...atAllInOne Yi 34B 200K V1 GGUF | 195K / 12.8 GB | 222 | 0 |
| ...Yi Ties 34B V1.0 MLX Q8 0.gguf | 195K / 36.5 GB | 20 | 0 |
| Dolphin 2.2 Yi 34B GGUF | 16K / 12.8 GB | 169 | 1 |
| Yi 1.5 34B Chat GGUF | 4K / 8.9 GB | 226 | 5 |
| ...ionStar Yi 34B Chat Llama GGUF | 4K / 12.8 GB | 203 | 2 |
| ...mantha 1.11 CodeLlama 34B GGUF | 2K / 12.5 GB | 112 | 2 |
| Yi 34B 200K RPMerge | 195K / 68.9 GB | 487 | 60 |
| ...34B 200K Aezakmi Raw 1902 EXL2 | 195K / 20.7 GB | 10 | 1 |
| Yi 34B 200K MAR2024 EXL2 4bpw | 195K / 18 GB | 9 | 1 |
| ...B 200K RPMerge 4.65bpw H6 EXL2 | 195K / 10.5 GB | 12 | 1 |
Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!