| Field | Value |
|---|---|
| LLM Name | Meta Llama 3 8B Hf |
| Repository 🤗 | https://huggingface.co/Undi95/Meta-Llama-3-8B-hf |
| Model Size | 8B |
| Required VRAM | 16.1 GB |
| Updated | 2025-03-25 |
| Maintainer | Undi95 |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.40.0.dev0 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Vocabulary Size | 128256 |
| Torch Data Type | bfloat16 |
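Given the card above (LlamaForCausalLM architecture, bfloat16 weights, 8192-token context, ~16.1 GB required VRAM), a minimal loading and generation sketch with the Hugging Face transformers library might look like the following. The repo id comes from the Repository link above; `device_map="auto"` and the sample prompt are illustrative assumptions, not part of the card.

```python
# Minimal sketch, assuming transformers >= 4.40 (matching the card's
# "Transformers Version") and a GPU with ~16.1 GB of free VRAM for the
# bfloat16 weights. device_map="auto" also requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Meta-Llama-3-8B-hf"  # from the Repository link above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type" on the card
    device_map="auto",           # illustrative; choose your own placement
)

# This appears to be a base (non-instruct) checkpoint, so plain-text
# completion is the natural usage pattern.
prompt = "The key idea behind rotary position embeddings is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```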
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Meta Llama 3 8B AWQ | 2 | 59 | 5 GB |
| Meta Llama 3 8B Hf AWQ | 0 | 13 | 5 GB |
| Meta Llama 3 8B AWQ | 0 | 6 | 5 GB |
| Meta Llama 3 8B AWQ | 0 | 6 | 5 GB |
| Meta Llama 3 8B AWQ | 0 | 9 | 5 GB |
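The AWQ uploads listed above reduce the VRAM footprint from 16.1 GB at bfloat16 to roughly 5 GB. As a hedged sketch, assuming the autoawq package is installed alongside transformers, loading such a checkpoint is an ordinary `from_pretrained` call; the repo id below is a hypothetical placeholder, so substitute one of the repositories from the table.

```python
# Hedged sketch for loading a 4-bit AWQ quant of this model.
# "someuser/Meta-Llama-3-8B-AWQ" is a hypothetical placeholder id;
# substitute an actual upload from the table above.
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_id = "someuser/Meta-Llama-3-8B-AWQ"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(awq_id)
# When the checkpoint's config declares AWQ quantization and autoawq is
# installed, transformers loads the quantized weights directly; no extra
# arguments are needed.
model = AutoModelForCausalLM.from_pretrained(awq_id, device_map="auto")
```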
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5220 | 681 |
| A8 | 1024K / 16.1 GB | 284 | 0 |
| A2 | 1024K / 16.1 GB | 363 | 0 |
| A4 | 1024K / 16.1 GB | 305 | 0 |
| A6 | 1024K / 16.1 GB | 421 | 0 |
| A18 | 1024K / 16.1 GB | 272 | 0 |
| A1 | 1024K / 16.1 GB | 343 | 0 |
| A10 | 1024K / 16.1 GB | 303 | 0 |
| A12 | 1024K / 16.1 GB | 256 | 0 |
| C31 | 1024K / 16.1 GB | 183 | 0 |