Codellama 13B Oasst Sft V10 by OpenAssistant


Tags: Autotrain compatible · Codegen · Custom code · Dataset: openassistant/oasst1 · Dataset: shahules786/orca-best · En · Endpoints compatible · Llama · Region: us · Safetensors · Sharded · Tensorflow

Codellama 13B Oasst Sft V10 Benchmarks

Codellama 13B Oasst Sft V10 (OpenAssistant/codellama-13b-oasst-sft-v10)

Codellama 13B Oasst Sft V10 Parameters and Internals

Model Type 
Causal decoder-only transformer language model
Additional Notes 
A demo and training logs are available, covering 6,123 training steps at batch size 64. The model was partially trained with Orca-style system messages.
Supported Languages 
en (English)
Training Details 
Data Sources:
OpenAssistant/oasst1, shahules786/orca-best
Methodology:
Fine-tuned with Orca-style system messages, using OpenAI's ChatML prompt format.
Hardware Used:
EPFL's Machine Learning and Optimization Laboratory, EPFL's Natural Language Processing Lab
Model Architecture:
RoPE Theta value (1e6 instead of 1e4)
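The raised RoPE theta (1e6 instead of Llama 2's default 1e4) is the change that stretches CodeLlama's usable context. A minimal sketch of the effect on the rotary inverse frequencies; the function is our own illustration, and head_dim=128 is an assumption (typical of 13B Llama models: 5120 hidden / 40 heads):

```python
def rope_inv_freq(theta: float, head_dim: int):
    """Standard RoPE inverse frequencies: inv_freq[i] = theta^(-2i / head_dim)."""
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

base = rope_inv_freq(1e4, 128)  # Llama 2 default theta
code = rope_inv_freq(1e6, 128)  # CodeLlama's raised theta

# With the larger theta, the slowest-rotating dimensions rotate far more
# slowly, so positional phases stay distinguishable over longer sequences.
print(base[-1], code[-1])
```

The first frequency is 1.0 in both cases; it is the tail of the spectrum that the larger theta compresses, which is what makes the 16K context workable.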
Responsible Ai Considerations 
Fairness:
Testing conducted in English and may not cover all scenarios, model may produce inaccurate, biased or objectionable responses.
Accountability:
Developers should perform safety testing and tuning for specific applications of the model.
Mitigation Strategies:
Refer to Meta's Responsible Use Guide
Input Output 
Input Format:
OpenAI's chatml standard format
Accepted Modalities:
text
Output Format:
text responses
Performance Tips:
Use the official Llama 2 system message for optimal inference.
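Since the card specifies ChatML as the input format, prompt assembly can be sketched as follows. This is an illustrative helper, not part of the model's tooling; `build_chatml_prompt` and the message texts are our own, and the official Llama 2 system message should be substituted for real use:

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML prompt: each turn is wrapped in
    <|im_start|>{role} ... <|im_end|>, and the prompt ends with an
    opened assistant turn for the model to complete."""
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>\n" for role, content in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    ("system", "You are a helpful coding assistant."),   # placeholder system text
    ("user", "Write a function that reverses a string."),
])
print(prompt)
```

The resulting string is passed to the tokenizer as-is; the ChatML markers are special tokens in this model's extended 32,032-entry vocabulary.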
LLM Name: Codellama 13B Oasst Sft V10
Repository 🤗: https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10
Model Size: 13b
Required VRAM: 25.9 GB
Updated: 2025-02-05
Maintainer: OpenAssistant
Model Type: llama
Model Files: 14 safetensors shards (13 × 1.9 GB + 1 × 1.2 GB)
Supported Languages: en
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32032
Torch Data Type: bfloat16
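The listed 25.9 GB requirement follows directly from 13 B parameters stored in bfloat16 (2 bytes per weight) and matches the sum of the shard sizes. A quick arithmetic check:

```python
# Shard sizes (GB) as listed in the model card: thirteen 1.9 GB shards
# plus one 1.2 GB shard.
shard_sizes_gb = [1.9] * 13 + [1.2]
total_gb = sum(shard_sizes_gb)

# bfloat16 stores each weight in 2 bytes, so a 13B-parameter model
# needs roughly 13e9 * 2 / 1e9 = 26 GB before any overhead.
bf16_estimate_gb = 13e9 * 2 / 1e9

print(f"shards: {total_gb:.1f} GB, bf16 estimate: {bf16_estimate_gb:.0f} GB")
# shards: 25.9 GB, bf16 estimate: 26 GB
```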

Quantized Models of the Codellama 13B Oasst Sft V10

Model                                Likes/Downloads   VRAM
...deLlama 13B Oasst Sft V10 GGUF    14976             5 GB
...odeLlama 13B Oasst Sft V10 AWQ    15                7 GB
...deLlama 13B Oasst Sft V10 GPTQ    1311              7 GB
...deLlama 13B Oasst Sft V10 GGML    69                5 GB
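The quantized VRAM figures are consistent with simple bits-per-weight arithmetic. A rough sketch, assuming the common 4-bit setting for GPTQ/AWQ and ignoring quantization metadata overhead, so these are approximations only:

```python
def estimate_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate checkpoint size: params * bits / 8 bytes, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

N = 13e9  # 13B parameters
print(round(estimate_size_gb(N, 16), 1))  # bf16 full precision: 26.0
print(round(estimate_size_gb(N, 4), 1))   # 4-bit GPTQ/AWQ: 6.5
```

The 4-bit estimate of ~6.5 GB lines up with the ~7 GB listed for the GPTQ and AWQ variants once scales and zero-points are included.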

Best Alternatives to Codellama 13B Oasst Sft V10

Best Alternatives                      Context / RAM    Downloads   Likes
NexusRaven V2 13B                      16K / 26 GB      3904        466
CodeLlama 13B Instruct Hf              16K / 26 GB      21714       145
CodeLlama 13B MORepair                 16K / 26 GB      26          2
CodeLlama 13B Hf                       16K / 26 GB      7911        103
CodeLlama 13B Hf                       16K / 26 GB      7548        0
CodeLlama 13B Instruct Hf              16K / 26 GB      12052       0
CodeLlama 13B Python Hf                16K / 26 GB      2654        49
...ma 13B Hf Truncated Embeddings      16K / 52.3 GB    5           0
Tora Code 13B V1.0                     16K / 26 GB      1257        14
...ma Airoboros Orca Platypus 13B      16K / 26 GB      1278        0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227