Nvidia CodeLlama 7B Instruct Bf16 Sharded by avemio-digital


  Autotrain compatible   Codegen   Custom code   Endpoints compatible   Instruct   Llama   Pytorch   Region:us   Sharded

Nvidia CodeLlama 7B Instruct Bf16 Sharded Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").

Nvidia CodeLlama 7B Instruct Bf16 Sharded Parameters and Internals

LLM Name: Nvidia CodeLlama 7B Instruct Bf16 Sharded
Repository 🤗: https://huggingface.co/avemio-digital/Nvidia_CodeLlama-7B-Instruct-bf16-sharded
Model Size: 7b
Required VRAM: 13.5 GB
Updated: 2024-09-19
Maintainer: avemio-digital
Model Type: llama
Instruction-Based: Yes
Model Files: 10.0 GB (1-of-2), 3.5 GB (2-of-2)
Generates Code: Yes
Model Architecture: LlamaForCausalLM
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.35.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32016
Torch Data Type: float16
Nvidia CodeLlama 7B Instruct Bf16 Sharded (avemio-digital/Nvidia_CodeLlama-7B-Instruct-bf16-sharded)
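The parameters above are enough to load the checkpoint with the Hugging Face transformers library. Below is a minimal loading and generation sketch: the repository id is taken from the listing, while the [INST] prompt template is assumed from the base CodeLlama Instruct models and is not confirmed by this page.

```python
# Minimal sketch (assumes `transformers` and `torch` are installed and the
# repository below is reachable); not an official example for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "avemio-digital/Nvidia_CodeLlama-7B-Instruct-bf16-sharded"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, vocabulary size 32016
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the listed torch dtype; ~13.5 GB of VRAM
    device_map="auto",          # places the two shards (10.0 GB + 3.5 GB) on available devices
)

# Assumption: this checkpoint keeps the standard CodeLlama-Instruct prompt format.
prompt = "[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```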

Quantized Models of the Nvidia CodeLlama 7B Instruct Bf16 Sharded

Model | Likes | Downloads | VRAM
...Llama 7B Instruct Bf16 Sharded | 1 | 12 | 13 GB
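Instead of downloading a pre-quantized repository, the checkpoint can also be loaded with on-the-fly quantization to reduce the ~13.5 GB fp16 footprint. A hedged sketch using bitsandbytes 8-bit loading (assumes `bitsandbytes` is installed and a CUDA GPU is available; not a recipe published by the maintainer):

```python
# On-the-fly 8-bit quantization sketch for the same repository.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "avemio-digital/Nvidia_CodeLlama-7B-Instruct-bf16-sharded"

quant_config = BitsAndBytesConfig(load_in_8bit=True)  # roughly halves the fp16 memory footprint
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=quant_config,
    device_map="auto",
)
```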

Best Alternatives to Nvidia CodeLlama 7B Instruct Bf16 Sharded

Best Alternatives | Context / RAM | Downloads | Likes
Deepseek Coder 6.7B Instruct | 16K / 13.5 GB | 91934 | 335
CodeLlama 7B Instruct Hf | 16K / 13.5 GB | 56533 | 206
CodeLlama 7B Instruct Hf | 16K / 13.5 GB | 10483 | 24
GetCode Slerp | 16K / 13.6 GB | 673 | 1
Cdlm 7 Ko Nl2sql V1.0 | 16K / 13.5 GB | 2260 | 4
Stack Codellama 7B Inst | 16K / 13.5 GB | 65 | 0
...ama 7B Instruct Hf Dequantized | 16K / 13.5 GB | 197 | 0
...deLlama 7B Instruct Hf 8bits Q | 16K / 4.2 GB | 2 | 1
Llama 2 7B Chat Finetune5 | 16K / 13.8 GB | 2 | 1
CodeLLama SFT FILTERED | 16K / 13.5 GB | 5 | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024072803