LLM Name | Dolphin 2.9.1 Llama 3 70B AWQ |
Repository 🤗 | https://huggingface.co/julep-ai/dolphin-2.9.1-llama-3-70b-awq |
Base Model(s) | |
Model Size | 70b |
Required VRAM | 39.9 GB |
Updated | 2024-09-18 |
Maintainer | julep-ai |
Model Type | llama |
Model Files | |
AWQ Quantization | Yes |
Quantization Type | awq |
Model Architecture | LlamaForCausalLM |
Context Length | 8192 |
Model Max Length | 8192 |
Transformers Version | 4.41.0 |
Tokenizer Class | PreTrainedTokenizerFast |
Padding Token | <|end_of_text|> |
Vocabulary Size | 128258 |
Torch Data Type | float16 |
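
The card above describes an AWQ-quantized LlamaForCausalLM with an 8192-token context and float16 weights. Below is a minimal loading sketch using 🤗 Transformers, assuming `transformers>=4.41.0` and the `autoawq` package are installed and that roughly 40 GB of GPU VRAM is available (see "Required VRAM"); the `device_map` and generation settings are illustrative, not taken from the model card.

```python
# Minimal sketch: load the AWQ checkpoint and run a short chat completion.
# Assumes the tokenizer ships a chat template (Dolphin releases typically use ChatML).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "julep-ai/dolphin-2.9.1-llama-3-70b-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's "Torch Data Type"
    device_map="auto",          # illustrative; spreads layers across available GPUs
)

messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
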
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
...0B Instruct Gradient 1048K AWQ | 1024K / 39.9 GB | 2 | 1 |
...70B Instruct Gradient 262K AWQ | 256K / 39.9 GB | 6 | 0 |
MultiVerse 70B AWQ | 32K / 41.3 GB | 64 | 2 |
Opus V1.2 70B AWQ | 32K / 36.7 GB | 16 | 1 |
QuartetAnemoi 70B T0.0001 AWQ | 31K / 36.7 GB | 20 | 1 |
Senku 70B AWQ 4bit GEMM | 31K / 36.7 GB | 17 | 1 |
Kiqu 70B AWQ | 31K / 36.7 GB | 27 | 1 |
CodeLlama 70B Hf AWQ | 16K / 36.6 GB | 25 | 4 |
Llama 3 70B Instruct AWQ | 8K / 39.9 GB | 35801 | 66 |
...Sft M1 D5 Abliterated AWQ 4bit | 8K / 39.9 GB | 1314 | 1 |