LLM Name | Miqu 1 70B 24GB VRAM IQ2 XS SOTA |
---|---|
Repository 🤗 | https://huggingface.co/amthe/Miqu-1-70b-24GB-VRAM-IQ2-XS-SOTA |
Model Size | 70b |
Required VRAM | 20.3 GB |
Updated | 2024-12-22 |
Maintainer | amthe |
Model Type | llama |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | LlamaForCausalLM |
Context Length | 32764 |
Model Max Length | 32764 |
Transformers Version | 4.37.2 |
Tokenizer Class | LlamaTokenizer |
Padding Token | <unk> |
Vocabulary Size | 32000 |
Torch Data Type | float16 |
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Reflection Llama 3.1 70B Bf16 | 128K / 141.9 GB | 283 | 6 |
Reflection Llama 3.1 70B GGUF | 128K / 26.4 GB | 149 | 4 |
...Horizon AI Korean Advanced 70B | 128K / 141.9 GB | 51 | 0 |
Midnight Miqu 70B V1.0 GGUF | 31K / 29.9 GB | 267 | 4 |
...ma3 70B Chinese Chat GGUF 4bit | 8K / 40 GB | 932 | 18 |
Llama 3 70B Quantised | 8K / 48.7 GB | 12 | 2 |
...3 Mega Dolphin 2.9.1 120b GGUF | 8K / 18.4 GB | 5 | 1 |
Meta Llama 3 70B Instruct | 8K / 40.3 GB | 10 | 1 |
CodeLlama 70B Instruct Hf GGUF | 4K / 25.5 GB | 174 | 2 |
Openthaigpt 1.0.0 70B Chat | 2K / 138.4 GB | 498 | 11 |