| Field | Value |
|---|---|
| LLM Name | Llama 3 8B Instruct GPTQ 4 Bit |
| Repository 🤗 | https://huggingface.co/astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit |
| Model Name | Meta-Llama-3-8B-Instruct |
| Model Creator | astronomer-io |
| Base Model(s) | |
| Model Size | 8b |
| Required VRAM | 5.7 GB |
| Updated | 2024-12-23 |
| Maintainer | astronomer |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Vocabulary Size | 128256 |
| Torch Data Type | bfloat16 |
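
For reference, a minimal loading sketch with 🤗 Transformers, assuming a CUDA GPU with roughly 6 GB of free VRAM and a GPTQ back end installed (e.g. `optimum` plus `auto-gptq`); the prompt text is purely illustrative:

```python
# Minimal sketch: load the GPTQ 4-bit checkpoint listed above.
# Assumes transformers >= 4.38 (as in the table) and GPTQ kernels available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The Llama 3 Instruct tokenizer ships a chat template, so build the prompt with it.
messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```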
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...a 3 8B Instruct 262K 4bit GPTQ | 256K / 5.8 GB | 9 | 1 |
| ... 8B Instruct 262K 4bit GPTQ 02 | 256K / 5.7 GB | 7 | 0 |
| ...lama 3.1 8B Instruct GPTQ INT4 | 128K / 5.8 GB | 88760 | 21 |
| ...Instruct 80K Qlora Merged GPTQ | 80K / 5.8 GB | 21 | 0 |
| ...oLeo Instruct 8B 32K V0.1 GPTQ | 64K / 5.7 GB | 7 | 0 |
| ...eta Llama 3 8B Instruct Marlin | 8K / 5.7 GB | 4075 | 0 |
| Llama 3 8B Instruct GPTQ 8 Bit | 8K / 9.7 GB | 4300 | 25 |
| Meta Llama 3 8B Instruct GPTQ | 8K / 5.7 GB | 721 | 3 |
| Meta Llama 3 8B Instruct GPTQ | 8K / 5.8 GB | 10 | 0 |
| Meta Llama 3 8B Instruct GPTQ | 8K / 5.8 GB | 14 | 1 |