Codellama 70B Instruct Nf4 Fp16 Upscaled by arnavgrg


Tags: Autotrain compatible, Codegen, Conversational, Endpoints compatible, Fp16, Instruct, Llama, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Codellama 70B Instruct Nf4 Fp16 Upscaled Benchmarks

Codellama 70B Instruct Nf4 Fp16 Upscaled (arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled)

Codellama 70B Instruct Nf4 Fp16 Upscaled Parameters and Internals

Model Type
text generation
Additional Notes
Quantization to NF4 is not lossless; the linear-layer weights are lossy approximations of the official base model's weights.
Training Details
Methodology: fp16 variant produced by upscaling (dequantizing) the weights after NF4 4-bit quantization.
Input Output
Accepted Modalities: text
Performance Tips: Upscaling ahead of time avoids quantization/dequantization overhead on every inference pass.
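To illustrate why the upscaled fp16 weights are lossy, here is a minimal pure-Python sketch of an NF4 (NormalFloat4) quantize/dequantize round-trip. This is not the bitsandbytes implementation; the 16-level codebook is the standard NF4 codebook from the QLoRA paper (values rounded), and the per-block absmax scaling is simplified to a single block.

```python
# Illustrative NF4 round-trip: quantize to 4-bit codes, then "upscale"
# (dequantize) back to floating point. The restored values generally
# differ from the originals, which is exactly the lossiness noted above.
NF4_LEVELS = [
    -1.0, -0.6962, -0.5251, -0.3949, -0.2844, -0.1848, -0.0911, 0.0,
    0.0796, 0.1609, 0.2461, 0.3379, 0.4407, 0.5626, 0.7230, 1.0,
]

def quantize_nf4(weights):
    """Absmax-scale the block, then snap each value to the nearest NF4 level."""
    absmax = max(abs(w) for w in weights) or 1.0
    codes = [min(range(16), key=lambda i: abs(w / absmax - NF4_LEVELS[i]))
             for w in weights]
    return codes, absmax

def dequantize_nf4(codes, absmax):
    """'Upscale': expand 4-bit codes back to floats once, ahead of inference."""
    return [NF4_LEVELS[c] * absmax for c in codes]

weights = [0.31, -0.12, 0.87, -0.55, 0.02]
codes, absmax = quantize_nf4(weights)
restored = dequantize_nf4(codes, absmax)
error = max(abs(a - b) for a, b in zip(weights, restored))
print(restored, error)  # error > 0: the round-trip is not lossless
```

Doing this expansion once and storing the result as fp16 (the "upscaled" checkpoint) trades disk/VRAM footprint for not having to dequantize on every forward pass.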
LLM Name: Codellama 70B Instruct Nf4 Fp16 Upscaled
Repository: https://huggingface.co/arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled
Model Size: 70b
Required VRAM: 138.7 GB
Updated: 2024-12-22
Maintainer: arnavgrg
Model Type: llama
Instruction-Based: Yes
Model Files: 4.7 GB (1-of-29), 4.7 GB (2-of-29), 5.0 GB (3-of-29), 5.0 GB (4-of-29), 4.7 GB (5-of-29), 4.7 GB (6-of-29), 4.7 GB (7-of-29), 5.0 GB (8-of-29), 5.0 GB (9-of-29), 4.7 GB (10-of-29), 4.7 GB (11-of-29), 4.7 GB (12-of-29), 5.0 GB (13-of-29), 5.0 GB (14-of-29), 4.7 GB (15-of-29), 4.7 GB (16-of-29), 4.7 GB (17-of-29), 5.0 GB (18-of-29), 5.0 GB (19-of-29), 4.7 GB (20-of-29), 4.7 GB (21-of-29), 4.7 GB (22-of-29), 5.0 GB (23-of-29), 5.0 GB (24-of-29), 4.7 GB (25-of-29), 4.7 GB (26-of-29), 4.7 GB (27-of-29), 5.0 GB (28-of-29), 3.8 GB (29-of-29)
Quantization Type: fp16
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.37.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32016
Torch Data Type: float16
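As a sanity check, the 29 safetensors shard sizes listed above sum to the Required VRAM figure:

```python
# Shard sizes (GB) for the 29 safetensors files, grouped by the
# repeating pattern visible in the file listing.
shard_gb = (
    [4.7, 4.7, 5.0, 5.0]             # shards 1-4
    + [4.7, 4.7, 4.7, 5.0, 5.0] * 4  # shards 5-24
    + [4.7, 4.7, 4.7, 5.0, 3.8]      # shards 25-29
)
total = round(sum(shard_gb), 1)
print(len(shard_gb), total)  # 29 shards, 138.7 GB total
```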

Best Alternatives to Codellama 70B Instruct Nf4 Fp16 Upscaled

Best Alternatives | Context / RAM | Downloads / Likes
...Llama 70B Instruct Hf 4bit MLX | 4K / 39.1 GB | 2425
...70B Instruct Hf 5.0bpw H6 EXL2 | 2K / 43.6 GB | 76
...0B Instruct Hf 2.65bpw H6 EXL2 | 2K / 23.4 GB | 93
...70B Instruct Hf 2.4bpw H6 EXL2 | 2K / 21.3 GB | 101
...70B Instruct Hf 4.0bpw H6 EXL2 | 2K / 35.1 GB | 121
CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 4291204
Code Llama 70B Python Instruct | 4K / 138.1 GB | 821
CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 28616
CodeLlama 70B Instruct Neuron | 4K / GB | 4251
CodeLlama 70B Instruct Hf GGUF | 4K / 25.5 GB | 1742


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217