CodeLlaMa 7B Dare Ties by yrju

  Merged Model   Arxiv:2306.01708   Arxiv:2311.03099   Autotrain compatible   Base model: codellama/CodeLlama-7b-Instruct-hf   Base model: codellama/CodeLlama-7b-Python-hf   Base model: meta-llama/Llama-2-7b-hf   Codegen   En   Endpoints compatible   Instruct   Llama   Region: us   Safetensors   Sharded   Tensorflow

CodeLlaMa 7B Dare Ties Benchmarks

Benchmark scores (percentages) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
CodeLlaMa 7B Dare Ties (yrju/CodeLlaMa-7B-dare-ties)

CodeLlaMa 7B Dare Ties Parameters and Internals

Additional Notes 
This is a merge of pre-trained language models created with mergekit, using the DARE TIES merge method; a toy sketch of the method follows below.
Supported Languages 
en (supported)
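For intuition, here is a minimal, hypothetical per-tensor sketch of what DARE TIES does, following the two cited papers: DARE (arXiv:2311.03099) randomly drops entries of each fine-tune's task vector and rescales the survivors, then TIES (arXiv:2306.01708) elects a per-element sign and averages only the contributions that agree with it. This is an illustration only, not mergekit's actual implementation; the function name and density value are invented for the example.

```python
import torch

def dare_ties_merge(base: torch.Tensor,
                    finetuned: list[torch.Tensor],
                    density: float = 0.5) -> torch.Tensor:
    """Toy per-tensor DARE TIES merge (illustrative, not mergekit's code)."""
    # Task vectors: what each fine-tune changed relative to the base.
    deltas = [ft - base for ft in finetuned]

    # DARE: randomly drop (1 - density) of each delta's entries,
    # then rescale the survivors by 1/density to preserve expected scale.
    kept = []
    for d in deltas:
        mask = (torch.rand_like(d) < density).to(d.dtype)
        kept.append(d * mask / density)

    stacked = torch.stack(kept)

    # TIES: elect a per-element sign by summed mass, then average
    # only the contributions whose sign agrees with the election.
    elected = torch.sign(stacked.sum(dim=0))
    agrees = (torch.sign(stacked) == elected).to(stacked.dtype)
    counts = agrees.sum(dim=0).clamp(min=1.0)
    merged_delta = (stacked * agrees).sum(dim=0) / counts

    return base + merged_delta
```

In practice mergekit applies this kind of merge across every tensor of the base models listed below, driven by a declarative config rather than hand-written code.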
LLM Name: CodeLlaMa 7B Dare Ties
Repository: 🤗 https://huggingface.co/yrju/CodeLlaMa-7B-dare-ties
Base Model(s): Llama 2 7B Hf (meta-llama/Llama-2-7b-hf), CodeLlama 7B Instruct Hf (codellama/CodeLlama-7b-Instruct-hf), CodeLlama 7B Python Hf (codellama/CodeLlama-7b-Python-hf)
Merged Model: Yes
Model Size: 7b
Required VRAM: 13.6 GB
Updated: 2025-02-22
Maintainer: yrju
Model Type: llama
Instruction-Based: Yes
Model Files: 14 safetensors shards (0.9 GB: shards 1, 6, 11, 14; 1.0 GB: shards 2-5, 7-10, 12-13)
Supported Languages: en
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.41.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
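Given the listing above (LlamaForCausalLM, LlamaTokenizer, float16 weights, 4096-token context, ~13.6 GB of VRAM), the model should load with the standard transformers API. A minimal sketch follows; the prompt text is a placeholder, and generation settings are illustrative rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yrju/CodeLlaMa-7B-dare-ties"

# float16 matches the listed torch dtype; ~13.6 GB of VRAM is required.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Placeholder prompt; since the merge includes an instruct model,
# plain natural-language instructions are a reasonable starting point.
prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```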

Best Alternatives to CodeLlaMa 7B Dare Ties

Best Alternatives                   Context / RAM   Downloads   Likes
Deepseek Coder 6.7B Instruct        16K / 13.5 GB   46744       377
CodeLlama 7B Instruct Hf            16K / 13.5 GB   43565       224
CodeLlama 7B Instruct Hf            16K / 13.5 GB   30156       40
GetCode Slerp                       16K / 13.6 GB   1753        1
CodeV CL 7B                         16K / 13.5 GB   73          1
Cdlm 7 Ko Nl2sql V1.0               16K / 13.5 GB   2032        4
Stack Codellama 7B Inst             16K / 13.5 GB   28          0
Glaive Coder 7B                     16K / 27 GB     2078        54
...ama 7B Instruct Hf Dequantized   16K / 13.5 GB   197         0
...deLlama 7B Instruct Hf 8bits Q   16K / 4.2 GB    19          1

Rank the CodeLlaMa 7B Dare Ties Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227