Starcoder2 7B AWQ by TechxGenus


Tags: arxiv:2004.05150, arxiv:2205.14135, arxiv:2207.14255, arxiv:2305.13245, 4-bit, autotrain-compatible, awq, code, dataset:bigcode/the-stack-v2-t..., endpoints-compatible, model-index, quantized, region:us, safetensors, starcoder2

Starcoder2 7B AWQ (TechxGenus/starcoder2-7b-AWQ)

Starcoder2 7B AWQ Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
Research, Code generation tasks
Applications:
Text generation for programming based tasks
Primary Use Cases:
Generating code snippets, Preliminary code suggestions
Limitations:
Generated code may be inefficient or contain bugs. It is not an instruction model, so instruction-style prompts such as 'Write a function that computes the square root.' may not work well.
Considerations:
Outputs may contain bugs or inefficiencies, so they should be validated by a human before use.
Additional Notes 
Consider the ethical and attribution aspects when using outputs from this model.
Supported Languages 
The model was trained on 17 programming languages from The Stack v2.
Training Details 
Data Sources:
GitHub code, arXiv, Wikipedia
Data Volume:
3.5+ trillion tokens
Methodology:
The model uses Grouped Query Attention, sliding-window attention, and a Fill-in-the-Middle training objective (see the FIM prompt sketch at the end of this section).
Context Length:
16384
Hardware Used:
432 H100 GPUs
Model Architecture:
Transformer decoder with grouped-query and sliding window attention
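
Because of the Fill-in-the-Middle objective, the model can complete code between a given prefix and suffix rather than only continuing left-to-right. Below is a minimal sketch of how such a prompt is assembled; the special-token names (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`) follow the StarCoder-family convention and are an assumption here, so verify them against this checkpoint's tokenizer before relying on them.

```python
# Sketch of a Fill-in-the-Middle (FIM) prompt. The token names below follow
# the StarCoder-family convention and are assumed, not confirmed for this
# checkpoint; check tokenizer.special_tokens_map to be sure.
prefix = "def average(numbers):\n    "
suffix = "\n    return total / len(numbers)\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
# Generating from fim_prompt should yield the missing middle,
# e.g. "total = sum(numbers)".
```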
Input Output 
Input Format:
Programming code prompts.
Accepted Modalities:
text
Output Format:
Generated text/code snippets
Performance Tips:
Use context-rich prompts for better quality responses.
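
To illustrate the tip above: since this is a base model rather than an instruction-tuned one, a context-rich prompt means partial code (for example, a signature plus docstring) rather than a natural-language request. A minimal generation sketch using the transformers pipeline API follows, assuming a CUDA GPU and that the transformers and autoawq packages are installed; the fibonacci prompt is purely illustrative.

```python
# Minimal generation sketch (assumes `transformers` and `autoawq` are
# installed and a CUDA GPU is available).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TechxGenus/starcoder2-7b-AWQ",
    device_map="auto",
)

# A context-rich prompt: code context instead of a bare instruction.
prompt = '''def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number iteratively."""
'''
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```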
LLM Name: Starcoder2 7B AWQ
Repository: https://huggingface.co/TechxGenus/starcoder2-7b-AWQ
Base Model(s): Starcoder2 7B (bigcode/starcoder2-7b)
Model Size: 7b
Required VRAM: 4.5 GB
Updated: 2025-05-17
Maintainer: TechxGenus
Model Type: starcoder2
Model Files: 4.5 GB
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: Starcoder2ForCausalLM
License: bigcode-openrail-m
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.39.3
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 49152
Torch Data Type: float16
Activation Function: gelu
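
Consistent with the details above (4-bit AWQ weights, float16 torch dtype, roughly 4.5 GB of VRAM), the checkpoint can be loaded with plain transformers, which reads the AWQ quantization config stored in the repo. A sketch, again assuming autoawq is installed; the quicksort prompt is illustrative.

```python
# Loading sketch: transformers picks up the AWQ quantization config from the
# repo, so no explicit quantization arguments are needed (autoawq assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/starcoder2-7b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # GPT2Tokenizer, 49152-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the checkpoint's torch dtype
    device_map="auto",          # ~4.5 GB of VRAM for the 4-bit weights
)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```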

Best Alternatives to Starcoder2 7B AWQ

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...Starcoder2 7B Bnb 4bit Smashed | 16K / 4.4 GB | 5 | 0 |
| Starcoder2 7B 4bit | 16K / 4.4 GB | 43 | 2 |
| Starcoder2 7B | 16K / 14.4 GB | 56183 | 177 |
| StarCoder2 7B GGUF | 16K / 2.7 GB | 7484 | 13 |
| Dolphincoder Starcoder2 7B | 16K / 14.9 GB | 63 | 11 |
| Starcoder2 7B Int4 Ov | 16K / 3.8 GB | 15 | 0 |
| Jmg Starcoder2 7B 100K | 16K / 14.4 GB | 5 | 0 |
| Starcoder2 7B Instruct GPTQ | 16K / 4.5 GB | 18 | 1 |
| Starcoder2 7B GPTQ | 16K / 4.5 GB | 5 | 2 |
| Speechless Starcoder2 7B | 16K / 14.4 GB | 10 | 5 |


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227