| Property | Value |
|---|---|
| LLM Name | Smaug 72B V0.1 |
| Repository | Open on 🤗 |
| Base Model(s) | |
| Model Size | 72b |
| Required VRAM | 144.5 GB |
| Updated | 2024-07-27 |
| Maintainer | abacusai |
| Model Type | llama |
| Model Files | |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.36.2 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 152064 |
| LoRA Model | Yes |
| Torch Data Type | bfloat16 |
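The 144.5 GB VRAM figure follows directly from the model size and data type: 72B parameters at 2 bytes each (bfloat16) come to roughly 144 GB of weights alone, before KV-cache and activation overhead. A minimal sketch of that arithmetic, assuming the nominal 72e9 parameter count implied by the "72b" size label:

```python
# Rough weight-memory estimate for a 72B model stored in bfloat16.
# The exact parameter count is an assumption taken from the "72b" label.
params = 72e9
bytes_per_param = 2  # bfloat16 = 16 bits = 2 bytes
weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.1f} GB")  # close to the listed 144.5 GB requirement
```

The small gap between this estimate and the listed 144.5 GB is consistent with the parameter count being slightly above the rounded 72B figure.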
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Smaug 72B V0.1 AWQ | 3 | 50 | 41 GB |
| Smaug 72B V0.1 GPTQ | 2 | 28 | 41 GB |
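The 41 GB footprint of the AWQ and GPTQ variants is consistent with roughly 4-bit weight quantization: 72B parameters at 0.5 bytes each is about 36 GB, with the remaining few GB plausibly taken up by quantization scales, any unquantized layers, and runtime overhead. A hedged back-of-envelope check (parameter count again assumed from the "72b" label):

```python
# Approximate weight memory for a 72B model quantized to 4 bits (AWQ/GPTQ).
params = 72e9
bits_per_param = 4
weight_gb = params * bits_per_param / 8 / 1e9
print(f"{weight_gb:.1f} GB")  # ~36 GB of raw weights vs. the listed 41 GB
```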
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Rhea 72B V0.5 | 0.3 | 32K / 144.5 GB | 2850 | 130 |
| JuliusCesar 72B BeyonderV.0 | 0.3 | 32K / 74.2 GB | 327 | 0 |
| TW3 JRGL V2 | 0.3 | 32K / 79.7 GB | 2727 | 0 |
| Le Triomphant ECE TW3 | 0.3 | 32K / 79.7 GB | 2730 | 3 |
| ECE TW3 JRGL V3 | 0.3 | 32K / 77.8 GB | 2066 | 0 |
| Rhea 125 V0.5 | 0.2 | 32K / 249 GB | 237 | 0 |
| Exodius 70B | 0.2 | 32K / 144.6 GB | 245 | 0 |
| MoMo 72B Lora 1.8.7 DPO | 0.2 | 32K / 208.5 GB | 1539 | 68 |
| ECE TW3 JRGL V5 | 0.2 | 32K / 159.6 GB | 571 | 0 |
| MoMo 72B LoRA V1.4 | 0.2 | 32K / 208.5 GB | 937 | 87 |