Stablecode Completion Alpha 3B 4K GPTQ by TheBloke


Tags: Arxiv:1910.02054 · Arxiv:2104.09864 · 4-bit · Autotrain compatible · Base model:quantized:stability... · Base model:stabilityai/stablec... · Code · Codegen · Dataset:bigcode/starcoderdata · Gpt neox · Gptq · Model-index · Quantized · Region:us · Safetensors

Stablecode Completion Alpha 3B 4K GPTQ Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Stablecode Completion Alpha 3B 4K GPTQ (TheBloke/stablecode-completion-alpha-3b-4k-GPTQ)

Stablecode Completion Alpha 3B 4K GPTQ Parameters and Internals

Model Type: causal-lm, text-generation
Use Cases
Areas: research, commercial applications
Applications: code completion
Primary Use Cases: single- and multi-line code completion from a long context window of up to 4k tokens
Limitations: not intended for unlawful content or activities with a high risk of harm
Considerations: use in conjunction with tools such as Hugging Face's VSCode extension for responsible usage
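The completion use case can be sketched with the Hugging Face transformers API. A minimal sketch, assuming a transformers install with GPTQ support (optimum/auto-gptq); the repo id is from this page, while the sampling settings and the `fits_context` helper are illustrative:

```python
MAX_CONTEXT = 4096  # the model's context window, per this page


def fits_context(prompt_tokens: int, new_tokens: int,
                 max_context: int = MAX_CONTEXT) -> bool:
    """True if the prompt plus the requested completion fits the 4k window."""
    return prompt_tokens + new_tokens <= max_context


def complete(prompt: str, max_new_tokens: int = 64) -> str:
    # Imported lazily so fits_context() is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "TheBloke/stablecode-completion-alpha-3b-4k-GPTQ"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    assert fits_context(inputs["input_ids"].shape[1], max_new_tokens)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                         do_sample=True, temperature=0.2)
    return tokenizer.decode(out[0], skip_special_tokens=True)


# Usage (downloads the ~1.8 GB quantized weights on first run):
# print(complete("def fibonacci(n):\n"))
```

The length check matters because prompt tokens and generated tokens share the same 4096-token window.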
Additional Notes
TheBloke's LLM work is supported by a grant from Andreessen Horowitz (a16z). Multiple quantisation parameter sets are provided for compatibility with a range of hardware.
Supported Languages: code (programming languages that topped the Stack Overflow developer survey)
Training Details
Data Sources: bigcode/starcoderdata
Data Volume: 300 billion tokens
Methodology: pre-trained at a context length of 4096 for 300 billion tokens
Context Length: 4096
Model Architecture: auto-regressive language model based on the transformer decoder architecture, trained under 2D parallelism with ZeRO-1, using rotary embedding kernels
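The rotary embeddings mentioned above (arXiv:2104.09864) can be illustrated in a few lines of NumPy. This is a sketch of the idea only, not the model's fused kernel; in GPT-NeoX the rotation is applied per attention head to queries and keys:

```python
import numpy as np


def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embeddings to x of shape (seq, dim); dim must be even.

    Each pair of channels (i, i + dim/2) is rotated by an angle proportional
    to the token's position, so q·k after rotation depends only on the
    relative distance between positions.
    """
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)        # per-pair rotation frequencies
    angles = positions[:, None] * freqs[None, :]     # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

Because each rotation is orthogonal, vector norms are preserved, and attention scores become a function of relative position rather than absolute position.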
Input/Output
Input Format: tokens, up to 4k
Accepted Modalities: text
Output Format: generated code
LLM Name: Stablecode Completion Alpha 3B 4K GPTQ
Repository 🤗: https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ
Model Creator: StabilityAI
Base Model(s): Stablecode Completion Alpha 3B 4K (stabilityai/stablecode-completion-alpha-3b-4k)
Model Size: 3B
Required VRAM: 1.8 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: gpt_neox
Model Files: 1.8 GB
Supported Languages: code
GPTQ Quantization: Yes
Quantization Type: gptq
Generates Code: Yes
Model Architecture: GPTNeoXForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.30.2
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 49152
Torch Data Type: float16
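Given the figures above (3B parameters, 4-bit GPTQ, 1.8 GB of model files), the packed-weight footprint can be estimated directly; the margin over the estimate covers layers kept at higher precision and packing overhead. A hedged sketch: the sizing helper is illustrative, and the loading call assumes the auto-gptq package with a CUDA device available:

```python
def quantized_weight_gb(n_params: float, bits: int) -> float:
    """Approximate size of the packed quantized weights alone, in decimal GB."""
    return n_params * bits / 8 / 1e9


def load_quantized(repo_id: str = "TheBloke/stablecode-completion-alpha-3b-4k-GPTQ"):
    # Lazy imports: auto-gptq and transformers are only needed when loading.
    from auto_gptq import AutoGPTQForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
    model = AutoGPTQForCausalLM.from_quantized(
        repo_id, use_safetensors=True, device="cuda:0"
    )
    return tokenizer, model


# 3B parameters at 4 bits ≈ 1.5 GB of packed weights,
# consistent with the 1.8 GB file size reported above.
```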

Best Alternatives to Stablecode Completion Alpha 3B 4K GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
...blecode Instruct Alpha 3B GPTQ | 4K / 1.8 GB | 17 | 17
...blecode Instruct Alpha 3B GPTQ | 4K / 1.8 GB | 6 | 1
Stablecode Completion Alpha 3B | 16K / 14.1 GB | 120 | 116
...blecode Completion Alpha 3B 4K | 4K / 6.1 GB | 1436 | 282
Stablecode Instruct Alpha 3B | 4K / 6.1 GB | 35 | 304
StableCode 3B | 4K / 6.1 GB | 14 | 1
...tion Alpha 3B 4K Openvino Int8 | 4K / 2.8 GB | 23 | 1
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/stablecode-completion-alpha-3b-4k-GPTQ.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227