CodeFuse CodeLlama 34B by codefuse-ai


Arxiv: 2311.02303 | Datasets: codefuse-ai/codeexerci..., codefuse-ai/evol-instr... | Languages: en, zh | Tags: autotrain compatible, code, codegen, endpoints compatible, instruct, llama, pytorch, safetensors, sharded, tensorflow | Region: us

CodeFuse CodeLlama 34B Benchmarks

Benchmark scores for CodeFuse CodeLlama 34B (codefuse-ai/CodeFuse-CodeLlama-34B) are reported as percentages relative to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

CodeFuse CodeLlama 34B Parameters and Internals

Model Type: code generation
Additional Notes: The model was fine-tuned with a context length of 4K, which can be extended to 16K if necessary. A 4-bit quantized version is also available for hardware-constrained setups (see the loading sketch below).
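
As a rough illustration of running the model under a tight VRAM budget, the sketch below loads the checkpoint in 4-bit through transformers and bitsandbytes. This is an assumption about a workable setup, not the project's documented path; the officially published 4-bit checkpoint may ship its own recommended loading code.

```python
# Minimal 4-bit loading sketch using transformers + bitsandbytes.
# Assumption: on-the-fly NF4 quantization of the bf16 weights; the official
# 4-bit release (a separate checkpoint) may use a different recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "codefuse-ai/CodeFuse-CodeLlama-34B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the checkpoint's bfloat16 dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs
)
```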
Training Details 
Data Volume: 600k instruction/answer pairs
Methodology: QLoRA fine-tuning (an illustrative setup is sketched after this list)
Context Length: 4,000 tokens
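
For readers unfamiliar with the method, the sketch below shows what a QLoRA setup of this kind typically looks like with the peft and bitsandbytes libraries. The base checkpoint name and all hyperparameters are illustrative assumptions, not the team's actual training configuration.

```python
# Illustrative QLoRA fine-tuning setup (assumed base model and placeholder
# hyperparameters; not the original CodeFuse training code).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "codellama/CodeLlama-34b-hf"  # assumed base checkpoint

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # cast norms, prep for checkpointing

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# The adapter would then be trained on the ~600k instruction/answer pairs with
# sequences truncated or packed to the 4K-token context length.
```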
Input Output 
Input Format: A single string formed by concatenating the conversation turns in the training-data format (see the example below).
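
A minimal end-to-end sketch of building that concatenated prompt string and generating from it is shown below. The role tags are assumed from the CodeFuse conversation format; verify the exact template against the model card before relying on it.

```python
# Minimal inference sketch: concatenate conversation turns into one string and
# generate. The role tags are assumed from the CodeFuse conversation format;
# confirm the exact template in the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codefuse-ai/CodeFuse-CodeLlama-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

HUMAN = "<|role_start|>human<|role_end|>"
BOT = "<|role_start|>bot<|role_end|>"

# Each turn is appended as <role tag> + content; the string ends with the bot
# tag so the model continues as the assistant.
prompt = f"{HUMAN}Write a Python function that implements quick sort.{BOT}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```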
LLM Name: CodeFuse CodeLlama 34B
Repository: https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B
Model Size: 34B
Required VRAM: 67.5 GB
Updated: 2025-05-21
Maintainer: codefuse-ai
Model Type: llama
Instruction-Based: Yes
Model Files: 9.8 GB (1-of-7), 9.7 GB (2-of-7), 9.7 GB (3-of-7), 9.7 GB (4-of-7), 9.7 GB (5-of-7), 9.7 GB (6-of-7), 9.2 GB (7-of-7)
Supported Languages: en, zh
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: other
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.32.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Padding Token: <unk>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
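
The architecture and tokenizer values listed above can be cross-checked directly against the published config and tokenizer files; a small sketch follows, with the expected values shown as comments.

```python
# Cross-check the listed config/tokenizer values against the published files.
from transformers import AutoConfig, AutoTokenizer

model_id = "codefuse-ai/CodeFuse-CodeLlama-34B"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)            # expected: ['LlamaForCausalLM']
print(config.max_position_embeddings)  # expected: 16384
print(config.vocab_size)               # expected: 32000
print(config.torch_dtype)              # expected: torch.bfloat16
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>
```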

Quantized Models of the CodeFuse CodeLlama 34B

Model | Likes | Downloads | VRAM
CodeFuse CodeLlama 34B GGUF | 20 | 795 | 14 GB
CodeFuse CodeLlama 34B 4bits | 28 | 11 | 19 GB
CodeFuse CodeLlama 34B AWQ | 2 | 5 | 18 GB
CodeFuse CodeLlama 34B GPTQ | 9 | 20 | 17 GB

Best Alternatives to CodeFuse CodeLlama 34B

Best Alternatives | Context / RAM | Downloads | Likes
...gpt 32K Codellama 34B Instruct | 32K / 67.5 GB | 18 | 2
CodeLlama 34B Instruct Hf | 16K / 67.5 GB | 12354 | 287
Speechless Codellama 34B V2.0 | 16K / 67.5 GB | 8 | 17
CodeLlama 34B Instruct Hf | 16K / 67.5 GB | 1577 | 17
Speechless Codellama 34B V1.9 | 16K / 67.5 GB | 7 | 0
XAgentLLaMa 34B Preview | 16K / 157.3 GB | 5 | 3
MathCoder CL 34B | 16K / 67.5 GB | 10 | 3
CodeLlama 34B Instruct Hf | 16K / 1.4 GB | 9 | 3
...gpt 16K Codellama 34B Instruct | 16K / 67.5 GB | 3 | 4
CodeLlama 34B Instruct Fp16 | 16K / 67.5 GB | 12 | 7



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227