Llama 3 15B Instruct Ft V2 AWQ by solidrust


Tags: 4-bit · AWQ · Autotrain compatible · Base model: elinas/llama-3-15b-... · Base model (quantized): elinas/ll... · Conversational · Endpoints compatible · Instruct · Llama · Quantized · Region: us · Safetensors · Sharded · Tensorflow

Llama 3 15B Instruct Ft V2 AWQ Benchmarks

Llama 3 15B Instruct Ft V2 AWQ (solidrust/Llama-3-15B-Instruct-ft-v2-AWQ)

Llama 3 15B Instruct Ft V2 AWQ Parameters and Internals

Model Type: text-generation
Additional Notes: AWQ (Activation-aware Weight Quantization) is a low-bit, weight-only quantization method that enables faster Transformer inference at 4-bit precision.
Input/Output — Accepted Modalities: text
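For readers who want to try the checkpoint, here is a minimal loading sketch (not part of the original card) using the `transformers` library, which supports AWQ checkpoints when the `autoawq` package is installed; it assumes roughly 9.4 GB of free VRAM, per the spec below:

```python
# Sketch: loading this AWQ checkpoint via transformers (assumes `autoawq`
# is installed, e.g. `pip install autoawq transformers`).
MODEL_ID = "solidrust/Llama-3-15B-Instruct-ft-v2-AWQ"

def load_awq_model(model_id: str = MODEL_ID):
    """Download and load the 4-bit AWQ weights onto available GPU(s)."""
    # Imported lazily so the sketch can be read/run without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_awq_model()
    messages = [{"role": "user", "content": "Summarize AWQ in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The chat-template call relies on the tokenizer shipping a Llama-3-style template; if it does not, format the prompt manually with the Llama 3 special tokens.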
LLM Name: Llama 3 15B Instruct Ft V2 AWQ
Repository 🤗: https://huggingface.co/solidrust/Llama-3-15B-Instruct-ft-v2-AWQ
Base Model(s): Llama 3 15B Instruct Ft V2 (elinas/Llama-3-15B-Instruct-ft-v2)
Model Size: 15b
Required VRAM: 9.4 GB
Updated: 2024-12-21
Maintainer: solidrust
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-2), 4.4 GB (2-of-2)
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
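A back-of-envelope check (my arithmetic, not from the card) shows why a 15B-parameter model quantized to 4 bits ships in roughly 9.4 GB of weight files; the per-group overhead figure assumes a group size of 128, a common AWQ default not stated on this page:

```python
# Why ~9.4 GB for a 4-bit 15B model: raw payload plus quantization metadata.
params = 15e9                                   # from "Model Size: 15b"
bits_per_weight = 4                             # AWQ 4-bit quantization
base_gb = params * bits_per_weight / 8 / 1e9    # raw 4-bit weight payload
# Per-group fp16 scale + zero-point (assumed group size 128 → ~4 extra
# bytes per 128 weights).
overhead_gb = params / 128 * 4 / 1e9
print(round(base_gb, 2), round(overhead_gb, 2))  # 7.5 0.47
```

The remaining gap up to 9.4 GB is plausibly the embedding and output layers, which are typically kept in float16 rather than quantized.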

Best Alternatives to Llama 3 15B Instruct Ft V2 AWQ

Best Alternatives | Context / RAM | Downloads | Likes
L3 Aethora 15B V2 EXL2 4.0bpw | 8K / 8.4 GB | 11 | 1
L3 Aethora 15B V2 EXL2 5.0bpw | 8K / 10.2 GB | 10 | 1
L3 Aethora 15B V2 EXL2 6.0bpw | 8K / 11.9 GB | 7 | 1
L3 Aethora 15B V2 EXL2 8.0bpw | 8K / 12.4 GB | 3 | 1
L3 Aethora 15B V2 | 8K / 30.1 GB | 110 | 40
OpenCrystal 15B L3 V3 | 8K / 30 GB | 175 | 9
OpenCrystal 15B L3 V2 | 8K / 30 GB | 27 | 5
OpenCrystal L3 15B V2.1 | 8K / 30 GB | 11 | 5
Meta Llama 3 15B Instruct | 8K / 23.9 GB | 10 | 1
Llama 3 15B Instruct Ft V2 | 8K / 30.1 GB | 94 | 4
Note: a green score (e.g. "73.2") means that the model is better than solidrust/Llama-3-15B-Instruct-ft-v2-AWQ.

Rank the Llama 3 15B Instruct Ft V2 AWQ Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you searching for? 40,013 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217