Codellama 70B Instruct Nf4 Fp16 Upscaled by arnavgrg


Tags: Autotrain compatible, Codegen, Conversational, Endpoints compatible, Fp16, Instruct, Llama, Quantized, Region:us, Safetensors, Sharded, Tensorflow
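
The "Nf4 Fp16 Upscaled" name indicates a checkpoint whose weights were quantized to 4-bit NF4 and then upscaled (dequantized) back to FP16, which is consistent with the roughly 138.7 GB of float16 shards listed below. As a minimal, hedged sketch of the NF4 half of that round trip (not the maintainer's actual conversion script), loading a CodeLlama 70B Instruct checkpoint in NF4 with transformers and bitsandbytes might look like this:

```python
# Hypothetical sketch only: illustrates the NF4 quantization the model name
# refers to; it is not the maintainer's conversion pipeline.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-70b-Instruct-hf",  # assumed base model
    quantization_config=nf4_config,
    device_map="auto",
)
# Upscaling back to FP16 would then dequantize these NF4 weights and save
# them as regular float16 safetensors shards, as in the checkpoint below.
```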


Codellama 70B Instruct Nf4 Fp16 Upscaled Parameters and Internals

LLM Name: Codellama 70B Instruct Nf4 Fp16 Upscaled
Repository 🤗: https://huggingface.co/arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled
Model Size: 70b
Required VRAM: 138.7 GB
Updated: 2024-10-18
Maintainer: arnavgrg
Model Type: llama
Instruction-Based: Yes
Model Files (29 safetensors shards): 4.7 GB (1-of-29), 4.7 GB (2-of-29), 5.0 GB (3-of-29), 5.0 GB (4-of-29), 4.7 GB (5-of-29), 4.7 GB (6-of-29), 4.7 GB (7-of-29), 5.0 GB (8-of-29), 5.0 GB (9-of-29), 4.7 GB (10-of-29), 4.7 GB (11-of-29), 4.7 GB (12-of-29), 5.0 GB (13-of-29), 5.0 GB (14-of-29), 4.7 GB (15-of-29), 4.7 GB (16-of-29), 4.7 GB (17-of-29), 5.0 GB (18-of-29), 5.0 GB (19-of-29), 4.7 GB (20-of-29), 4.7 GB (21-of-29), 4.7 GB (22-of-29), 5.0 GB (23-of-29), 5.0 GB (24-of-29), 4.7 GB (25-of-29), 4.7 GB (26-of-29), 4.7 GB (27-of-29), 5.0 GB (28-of-29), 3.8 GB (29-of-29)
Quantization Type: fp16
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.37.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32016
Torch Data Type: float16
Codellama 70B Instruct Nf4 Fp16 Upscaled (arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled)
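
Given the LlamaForCausalLM architecture, LlamaTokenizer class, float16 torch dtype, and 4096-token context length listed above, a minimal, illustrative way to load the upscaled FP16 checkpoint with transformers might look like the sketch below (prompt formatting and hardware sizing are assumptions, not part of the model card):

```python
# Minimal sketch: load the upscaled FP16 checkpoint listed above.
# Assumes enough GPU/CPU memory for ~138.7 GB of float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the checkpoint's torch data type
    device_map="auto",          # shard across available devices
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)  # context limit is 4096 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```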

Best Alternatives to Codellama 70B Instruct Nf4 Fp16 Upscaled

Best Alternatives | Context / RAM | Downloads | Likes
...Llama 70B Instruct Hf 4bit MLX | 4K / 39.1 GB | 5 | 25
...70B Instruct Hf 5.0bpw H6 EXL2 | 2K / 43.6 GB | 5 | 6
...0B Instruct Hf 2.65bpw H6 EXL2 | 2K / 23.4 GB | 0 | 3
...70B Instruct Hf 2.4bpw H6 EXL2 | 2K / 21.3 GB | 2 | 1
...70B Instruct Hf 4.0bpw H6 EXL2 | 2K / 35.1 GB | 5 | 1
CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 14035 | 202
Code Llama 70B Python Instruct | 4K / 138.1 GB | 68 | 1
CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 1937 | 13
CodeLlama 70B Instruct GPTQ | 4K / 35.3 GB | 1046 | 12
CodeLlama 70B Instruct Hf GGUF | 4K / 25.5 GB | 680 | 2

Rank the Codellama 70B Instruct Nf4 Fp16 Upscaled Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024072803