Stable Code 3B GGUF by TheBloke


Tags: Arxiv:1910.02054, Arxiv:2104.09864, Arxiv:2204.06745, Arxiv:2305.06161, Arxiv:2307.09288, Arxiv:2309.12284, Arxiv:2310.10631, Base model:quantized:stability..., Base model:stabilityai/stable-..., Code, Dataset:bigcode/commitpackft, Dataset:bigcode/starcoderdata, Dataset:bigcode/the-stack-gith..., Dataset:eleutherai/proof-pile-..., Dataset:meta-math/metamathqa, Dataset:tiiuae/falcon-refinedw..., En, Gguf, Model-index, Quantized, Region:us, Stablelm epoch

Stable Code 3B GGUF Benchmarks

nn.n%: how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Stable Code 3B GGUF (TheBloke/stable-code-3b-GGUF)

Stable Code 3B GGUF Parameters and Internals

Model Type: auto-regressive, decoder-only, text-generation
Use Cases
Areas: research, commercial applications
Applications: code generation across various programming languages
Primary Use Cases: filling in the middle of code (FIM; see the prompt sketch after this list), generating text based on input prompts
Limitations: may exhibit unreliable or unsafe behavior without fine-tuning
Considerations: developers should ensure safe performance through fine-tuning for specific applications.
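A minimal sketch of how a fill-in-the-middle prompt can be assembled, assuming the FIM special tokens documented for the upstream stabilityai/stable-code-3b model (<fim_prefix>, <fim_suffix>, <fim_middle>); verify the token names against the tokenizer shipped with the GGUF build you use.

```python
# Hypothetical FIM prompt assembly; token names are assumed from the
# upstream stable-code-3b documentation, not verified for this GGUF build.
prefix = "def fibonacci(n):\n    "
suffix = "\n    return a\n"

fim_prompt = (
    "<fim_prefix>" + prefix
    + "<fim_suffix>" + suffix
    + "<fim_middle>"
)
# Send fim_prompt to the model as an ordinary completion prompt; the
# generated tokens are the code that belongs between prefix and suffix.
```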
Supported Languages: English (high proficiency)
Training Details
Data Sources: tiiuae/falcon-refinedweb, bigcode/the-stack-github-issues, bigcode/commitpackft, bigcode/starcoderdata, EleutherAI/proof-pile-2, meta-math/MetaMathQA
Data Volume: 1.3 trillion tokens
Methodology: pre-trained with the AdamW optimizer in bfloat16 precision, using FlashAttention and rotary position embeddings
Context Length: 16,384 tokens
Hardware Used: 256 NVIDIA A100 40GB GPUs
Model Architecture: decoder-only transformer, similar to LLaMA, with rotary position embeddings (see the sketch after this list)
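Rotary position embeddings (arXiv:2104.09864 in the tag list above) encode position by rotating pairs of query/key channels through a position-dependent angle rather than adding a position vector. A minimal NumPy sketch of the idea, ignoring model-specific details such as applying the rotation to only part of each attention head:

```python
import numpy as np

def rotary_embed(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embeddings to x of shape (seq_len, dim).

    Channel pairs (i, i + dim/2) are rotated by an angle that grows with
    position and shrinks with channel index; dim is assumed to be even.
    """
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))  # one frequency per pair
    ang = np.outer(np.arange(seq_len), inv_freq)          # (seq_len, half)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each pair: (x1, x2) -> (x1*cos - x2*sin, x1*sin + x2*cos)
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```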
Input / Output
Input Format: raw text or code prompts (see the run example after this list)
Accepted Modalities: text
Output Format: generated text or code
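A minimal sketch of running one of the GGUF files locally through the llama-cpp-python bindings; the file name and sampling settings are illustrative assumptions rather than values taken from this repository's documentation.

```python
from llama_cpp import Llama

# Hypothetical local file name; use whichever quantization you downloaded.
llm = Llama(
    model_path="stable-code-3b.Q4_K_M.gguf",
    n_ctx=16384,  # matches the trained context length listed above
)

result = llm(
    "def quicksort(arr):",  # raw code prompt, no chat template
    max_tokens=200,
    temperature=0.2,
    stop=["\n\n\n"],
)
print(result["choices"][0]["text"])
```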
LLM Name: Stable Code 3B GGUF
Repository: https://huggingface.co/TheBloke/stable-code-3b-GGUF
Model Name: Stable Code 3B
Model Creator: Stability AI
Base Model(s): Stable Code 3B (stabilityai/stable-code-3b)
Model Size: 3B
Required VRAM: 1.1 GB
Updated: 2025-03-12
Maintainer: TheBloke
Model Type: stablelm_epoch
Model Files: 1.1 GB, 1.5 GB, 1.4 GB, 1.2 GB, 1.6 GB, 1.7 GB, 1.6 GB, 1.9 GB, 2.0 GB, 1.9 GB, 2.3 GB, 3.0 GB
Supported Languages: en
GGUF Quantization: yes
Quantization Type: gguf
Model Architecture: AutoModel
License: other
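A minimal sketch for fetching a single quantized file from the repository with huggingface_hub; the exact GGUF file name is an assumption, so listing the repository files first is the safer path.

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "TheBloke/stable-code-3b-GGUF"

# Inspect the available quantizations before choosing one.
print(list_repo_files(repo_id))

# Hypothetical file name following TheBloke's usual naming scheme.
local_path = hf_hub_download(repo_id=repo_id, filename="stable-code-3b.Q4_K_M.gguf")
print(local_path)
```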

Best Alternatives to Stable Code 3B GGUF

Best Alternatives | Context / RAM | Downloads | Likes
MedQwen3B Reasoner | 0K / 6.2 GB | 1272 | 11
Llama 3.2 3B VanRossum | 0K / 6.5 GB | 72 | 10
Llama Deepsync 3B GGUF | 0K / 2 GB | 3321 | 13
Llama Chat Summary 3.2 3B GGUF | 0K / 2 GB | 994 | 12
QwQ LCoT 3B Instruct GGUF | 0K / 1.9 GB | 904 | 16
...a Song Stream 3B Instruct GGUF | 0K / 2 GB | 727 | 15
...ma Doctor 3.2 3B Instruct GGUF | 0K / 2 GB | 1553 | 16
... Sentient 3.2 3B Instruct GGUF | 0K / 2 GB | 985 | 14
...ma Magpie 3.2 3B Instruct GGUF | 0K / 2 GB | 796 | 11
Deepsync 240 GGUF | 0K / 2 GB | 89 | 11
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/stable-code-3b-GGUF.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227