Dolphincoder Starcoder2 15B by cognitivecomputations

Tags: Autotrain compatible, Conversational, Endpoints compatible, Instruct, PyTorch, Region: us, Sharded, Starcoder2, Language: en
Dataset tags: cognitivecomputations/..., cognitivecomputations/..., ise-uiuc/magicoder-evo..., ise-uiuc/magicoder-oss..., jondurbin/airoboros-2...., m-a-p/code-feedback, m-a-p/codefeedback-fil..., teknium/openhermes (full names listed under Training Details)

Dolphincoder Starcoder2 15B Parameters and Internals

Model Type: code generation, programming assistant
Additional Notes: The model is highly compliant and should be deployed behind an alignment layer to prevent unethical use.
Supported Languages: en (fluent)
Training Details:
  Data Sources: cognitivecomputations/dolphin, jondurbin/airoboros-2.2.1, cognitivecomputations/dolphin-coder, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, m-a-p/Code-Feedback, m-a-p/CodeFeedback-Filtered-Instruction
  Methodology: qLoRA fine-tuning, run with the Axolotl training framework (a minimal sketch follows this list)
  Training Time: 3 days
  Hardware Used: 8x H100s
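
The exact Axolotl configuration is not reproduced here, but the general qLoRA recipe can be sketched with the Hugging Face peft and bitsandbytes libraries. Every hyperparameter below (rank, alpha, target modules) is a typical illustrative choice, not a value confirmed for this model:

```python
# Minimal qLoRA sketch; the actual run used Axolotl, and every
# hyperparameter below is an illustrative assumption, not the
# released configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE = "bigcode/starcoder2-15b"  # base model that dolphincoder fine-tunes

# qLoRA step 1: freeze the base weights in 4-bit NF4 quantization.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# qLoRA step 2: train small low-rank adapters on top of the frozen base.
lora = LoraConfig(
    r=16,                       # assumed rank
    lora_alpha=32,              # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Quantizing the frozen base to 4 bits is what lets a 15B-parameter fine-tune fit the reported 8x H100 setup with headroom for optimizer state, since only the adapter weights carry gradients.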
Responsible AI Considerations:
  Transparency: Users must implement an alignment layer to ensure compliance with ethical standards.
  Accountability: Users are responsible for the model's outputs and should enforce compliance through that alignment layer.
Input Output:
  Input Format: ChatML prompt format (see the example after this list)
  Accepted Modalities: text
  Output Format: programs in various languages
  Performance Tips: Use an alignment layer to ensure compliance with ethical guidelines.
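
For reference, ChatML delimits each conversation turn with <|im_start|> and <|im_end|> tokens. Below is a minimal inference sketch using the standard transformers API; the system prompt is an illustrative stand-in for the alignment layer the maintainer asks for, not wording from the model card:

```python
# Minimal ChatML inference sketch; the system prompt here is an
# illustrative alignment layer, not text supplied by the maintainer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cognitivecomputations/dolphincoder-starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML format: each turn is wrapped in <|im_start|>role ... <|im_end|>.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful coding assistant. Refuse unethical requests.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```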
LLM Name: Dolphincoder Starcoder2 15B
Repository: 🤗 https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b
Model Size: 15B
Required VRAM: 31.9 GB
Updated: 2024-11-21
Maintainer: cognitivecomputations
Model Type: starcoder2
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-7), 5.0 GB (2-of-7), 5.0 GB (3-of-7), 5.0 GB (4-of-7), 5.0 GB (5-of-7), 5.0 GB (6-of-7), 2.0 GB (7-of-7)
Supported Languages: en
Model Architecture: Starcoder2ForCausalLM
License: bigcode-openrail-m
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.39.0.dev0
Tokenizer Class: GPT2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 49154
Torch Data Type: bfloat16
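The VRAM figure follows directly from the checkpoint: the seven shards sum to 4.9 + 5 x 5.0 + 2.0 = 31.9 GB, i.e. roughly 16 billion two-byte (bfloat16) parameters, consistent with the 15B label. Holding the unquantized weights therefore needs a multi-GPU device map or a single card with well over 32 GB, before accounting for KV-cache and activations; the quantized alternatives below shrink this footprint considerably.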

Best Alternatives to Dolphincoder Starcoder2 15B

Best Alternatives                      Context / RAM    Downloads / Likes
Starcoder2 15B Instruct V0.1           16K / 31.9 GB    834998
Starcoder2 15B Instruct V0.1           16K / 53.4 GB    110
Starcoder2 15B Instruct                16K / 31.9 GB    277
Starcoder2 15B Instruct GPTQ           16K / 9.2 GB     2599982
...rCoder2 15B Instruct V0.1 GGUF      16K / 6.2 GB     2120
...rCoder2 15B Instruct V0.1 GGUF      16K / 6.2 GB     960
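The GPTQ and GGUF rows are quantized repacks that trade some accuracy for a much smaller footprint (6.2 GB instead of 31.9 GB). As a rough sketch, a GGUF file like those listed above could be run with llama-cpp-python; the file name below is a hypothetical placeholder, since the table truncates the repository names:

```python
# Hypothetical GGUF usage via llama-cpp-python; model_path is a
# placeholder file name, not one confirmed by the table above.
from llama_cpp import Llama

llm = Llama(
    model_path="starcoder2-15b-instruct-v0.1.Q3_K_M.gguf",  # placeholder
    n_ctx=16384,      # matches the 16K context length listed above
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm(
    "Write a Python function that checks whether a number is prime.",
    max_tokens=200,
)
print(out["choices"][0]["text"])
```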


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110