Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ by abhinavkulkarni


Tags: Arxiv:2308.12950, Autotrain compatible, Awq, Code, Codegen, Instruct, Llama, Llama2, Pytorch, Quantized, Region:us

Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ Benchmarks

Scores are percentages showing how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ (abhinavkulkarni/codellama-CodeLlama-7b-Instruct-hf-w4-g128-awq)

Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ Parameters and Internals

Model Type: generative, text
Additional Notes: This is a 4-bit, group size 128 AWQ-quantized version of CodeLlama 7B Instruct, one of Meta's pretrained and fine-tuned generative text models; see the loading sketch after this list.
LLM Name: Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ
Repository 🤗: https://huggingface.co/abhinavkulkarni/codellama-CodeLlama-7b-Instruct-hf-w4-g128-awq
Model Size: 7b
Required VRAM: 3.9 GB
Updated: 2024-12-22
Maintainer: abhinavkulkarni
Model Type: llama
Instruction-Based: Yes
Model Files: 3.9 GB
Supported Languages: code
AWQ Quantization: Yes
Quantization Type: awq
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.33.1
Tokenizer Class: CodeLlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32016
Torch Data Type: float16
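Below is a minimal loading-and-generation sketch for this checkpoint. It is an illustration under stated assumptions, not the maintainer's documented procedure: it presumes the weights load through AutoAWQ's `from_quantized` API (`pip install autoawq transformers`), that a CUDA GPU with roughly 4 GB of free VRAM is available, and that the standard CodeLlama-Instruct `[INST] ... [/INST]` prompt convention applies. Only the repository ID comes from this page.

```python
# Hedged sketch: load the 4-bit, group-size-128 AWQ checkpoint and generate code.
# Assumes AutoAWQ compatibility; check the maintainer's README for the supported path.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "abhinavkulkarni/codellama-CodeLlama-7b-Instruct-hf-w4-g128-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)

# CodeLlama 7B Instruct follows the Llama-2 chat convention: [INST] ... [/INST].
prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Newer Transformers releases (4.35 and later) can also load AWQ checkpoints directly through `AutoModelForCausalLM.from_pretrained` when the config carries an AWQ `quantization_config`; the version listed above (4.33.1) predates that integration, which is why the sketch goes through AutoAWQ instead.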

Best Alternatives to Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ

Best Alternatives | Context / RAM | Downloads / Likes
CodeLlama 7B Instruct AWQ | 16K / 3.9 GB | 1764
...ruct Solidity Bnb 4bit Smashed | 16K / 4.2 GB | 140
...B Instruct Hf Bnb 4bit Smashed | 16K / 4.2 GB | 210
CodelLama7B Inst DPO 7K Mlx | 16K / 4.2 GB | 82
...eLlama 7B Instruct Hf 4bit MLX | 16K / 4.2 GB | 121
...6.7B Instruct 8.0bpw H8 EXL2 2 | 16K / 6.8 GB | 92
...6.7B Instruct 3.0bpw H6 EXL2 2 | 16K / 2.8 GB | 91
... 7B Instruct Nf4 Fp16 Upscaled | 16K / 13.5 GB | 150
CodeLlama 7B Instruct Fp16 | 16K / 13.5 GB | 338
...Llama 7B Instruct Bf16 Sharded | 16K / 13.5 GB | 161
Note: a green score (e.g. "73.2") means the alternative performs better than abhinavkulkarni/codellama-CodeLlama-7b-Instruct-hf-w4-g128-awq.

Rank the Codellama CodeLlama 7B Instruct Hf W4 G128 AWQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217