Codellama 34B Bnb 4bit by unsloth


Tags: 4-bit, 4bit, Autotrain compatible, Bitsandbytes, Codegen, Codellama, Codellama-34b, En, Endpoints compatible, Llama, Quantized, Region:us, Safetensors, Sharded, Tensorflow, Unsloth

Codellama 34B Bnb 4bit Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Codellama 34B Bnb 4bit Parameters and Internals

Model Type: text generation, transformers

Use Cases:
- Areas: research, commercial applications
- Applications: finetuning models for specific domains
- Considerations: Unsloth provides Colab notebooks for accessible finetuning.

Additional Notes: Focused on convenient, memory-efficient finetuning via Colab-supported notebooks for popular models.

Training Details:
- Hardware Used: A100

Input Output:
- Accepted Modalities: text
- Performance Tips: Use the correct resource allocation in Colab for optimal performance.
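The Colab finetuning workflow described above can be sketched in a few lines. This is a minimal sketch, assuming the `unsloth` package is installed and a CUDA GPU with enough free VRAM for the 4-bit weights is available; the load settings mirror the spec table below (4-bit quantization, 16384-token context):

```python
# Minimal sketch of loading this checkpoint for finetuning with Unsloth.
# Assumes the `unsloth` package is installed and a CUDA GPU with roughly
# 18 GB of free VRAM (the size of the 4-bit weights) is available.

MODEL_ID = "unsloth/codellama-34b-bnb-4bit"

# Load settings taken from the spec table: 16384-token context, 4-bit weights.
load_kwargs = {
    "model_name": MODEL_ID,
    "max_seq_length": 16384,
    "load_in_4bit": True,
}

def load_for_finetuning():
    """Not executed here: requires a GPU and the unsloth package."""
    from unsloth import FastLanguageModel  # local import: heavy dependency
    model, tokenizer = FastLanguageModel.from_pretrained(**load_kwargs)
    return model, tokenizer
```

On a free Colab T4 the 34B model will not fit; the resource-allocation tip above amounts to picking a runtime (e.g. A100) whose VRAM covers the quantized weights plus activations.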
LLM Name: Codellama 34B Bnb 4bit
Repository: https://huggingface.co/unsloth/codellama-34b-bnb-4bit
Model Size: 34b
Required VRAM: 18.2 GB
Updated: 2024-11-21
Maintainer: unsloth
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 5.0 GB (3-of-4), 3.2 GB (4-of-4)
Supported Languages: en
Quantization Type: 4bit
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.44.2
Tokenizer Class: CodeLlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Padding Token: <unk>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
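As a sanity check on the numbers above: the four safetensors shards sum to the listed 18.2 GB, which for a nominal 34B parameters works out to roughly 4.3 bits per weight, consistent with 4-bit quantization plus quantization constants and some layers kept in higher precision. A quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope check of the spec table above.
shards_gb = [5.0, 5.0, 5.0, 3.2]   # the four sharded model files
total_gb = sum(shards_gb)          # matches "Required VRAM: 18.2 GB"

params = 34e9                      # nominal parameter count ("34b")
bits_per_param = total_gb * 1e9 * 8 / params

print(f"total: {total_gb:.1f} GB, ~{bits_per_param:.2f} bits/param")
# → total: 18.2 GB, ~4.28 bits/param
```

The same arithmetic explains why the full-precision fp16 alternatives listed below weigh in at ~67.5 GB: 16 bits per parameter is roughly four times the 4-bit footprint.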
Codellama 34B Bnb 4bit (unsloth/codellama-34b-bnb-4bit)
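The tokenizer settings in the spec table (BOS `<s>`, EOS `</s>`, pad and unk both `<unk>`) determine how sequences are framed. As a hypothetical illustration (not the real CodeLlamaTokenizer, which operates on subword IDs) of how those special tokens wrap and pad a sequence:

```python
# Illustration only: shows how the special tokens listed in the spec table
# frame a sequence — BOS at the start, EOS at the end, and the pad token
# (here equal to the unk token) filling out short sequences in a batch.
BOS, EOS, PAD = "<s>", "</s>", "<unk>"  # values from the spec table

def frame(text: str, width: int) -> list[str]:
    """Hypothetical helper: wrap whitespace-split tokens with BOS/EOS, pad to width."""
    toks = [BOS] + text.split() + [EOS]
    return toks + [PAD] * (width - len(toks))

print(frame("def add(a, b): return a + b", width=12))
```

Because the pad token is `<unk>` here, finetuning code should mask padding positions out of the loss rather than rely on a dedicated pad ID.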

Quantized Models of the Codellama 34B Bnb 4bit

Model | Likes | Downloads | VRAM
Codellama Extraction | 0 | 14 | 67 GB

Best Alternatives to Codellama 34B Bnb 4bit

Best Alternatives | Context / RAM | Downloads | Likes
Codellama Extraction | 16K / 67.6 GB | 14 | 0
CodeLlama 34B Python Fp16 | 16K / 67.5 GB | 2259 | 12
CodeLlama 34B Instruct Fp16 | 16K / 67.5 GB | 2367 | 7
Phind Codellama 34B V2 EXL2 | 16K / n/a | 18 | 16
CodeLlama 34B Fp16 | 16K / 67.5 GB | 24 | 4
XwinCoder 34B 4.0bpw H6 EXL2 | 16K / 17.4 GB | 15 | 1
...Codellama 34B V2 Megacode EXL2 | 16K / n/a | 40 | 10
...gpt 32K Codellama 34B Instruct | 32K / 67.5 GB | 869 | 2
CodeLlama 34B Instruct Hf | 16K / 67.5 GB | 12497 | 278
ReflectionCoder CL 34B | 16K / 67.6 GB | 3522 | 0
Note: a green score (e.g. "73.2") means the model is better than unsloth/codellama-34b-bnb-4bit.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110