CodeGemma 7B AWQ by TechxGenus


Tags: 4-bit, Autotrain compatible, AWQ, Code, Endpoints compatible, Gemma, Quantized, Region: us, Safetensors, Sharded, TensorFlow

CodeGemma 7B AWQ Benchmarks

CodeGemma 7B AWQ (TechxGenus/CodeGemma-7b-AWQ)

CodeGemma 7B AWQ Parameters and Internals

Model Type 
text-generation, code generation
Use Cases 
Areas:
text generation, code generation
Primary Use Cases:
code generation, coding-related tasks
Limitations:
The model may produce errors or misleading content and may struggle with non-coding tasks. Testing has been limited, and additional safety evaluation is required.
Considerations:
Additional safety testing required for real-world deployment.
Supported Languages 
programming languages (Python)
Training Details 
Data Sources:
Gemma-7b base model, plus an additional 0.7 billion high-quality, code-related tokens
Data Volume:
0.7 billion high-quality tokens
Methodology:
Fine-tuned with DeepSpeed ZeRO 3 and Flash Attention 2
Training Time:
3 epochs
Safety Evaluation 
Ethical Considerations:
Testing has been limited; additional safety testing is recommended before real-world deployment.
Input Output 
Input Format:
Alpaca instruction format, excluding the system prompt (see the prompt sketch after this section)
Accepted Modalities:
text
Output Format:
text
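
The card states only that inputs follow the Alpaca instruction format without a system prompt. The snippet below is a minimal sketch of what such a prompt might look like; the exact template is an assumption and should be checked against the maintainer's examples.

# Minimal sketch of an Alpaca-style prompt without a system prompt.
# The exact template expected by CodeGemma 7B AWQ is an assumption here.
instruction = "Write a Python function that checks whether a number is prime."
prompt = (
    "### Instruction:\n"
    f"{instruction}\n\n"
    "### Response:\n"
)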
LLM Name: CodeGemma 7B AWQ
Repository: 🤗 https://huggingface.co/TechxGenus/CodeGemma-7b-AWQ
Base Model(s): Codegemma 7B (unsloth/codegemma-7b)
Model Size: 7b
Required VRAM: 7.2 GB
Updated: 2025-02-22
Maintainer: TechxGenus
Model Type: gemma
Model Files: 6.6 GB (1-of-2), 0.6 GB (2-of-2)
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: GemmaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.0.dev0
Tokenizer Class: GemmaTokenizer
Padding Token: <eos>
Vocabulary Size: 256000
Torch Data Type: float16
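
As a usage illustration (not part of the original card), the sketch below shows one common way to load an AWQ-quantized checkpoint such as this one with the Hugging Face transformers library. It assumes a recent transformers release, the autoawq package, and a CUDA GPU with roughly 7.2 GB of free VRAM, as listed above.

# Minimal loading/generation sketch; package and hardware assumptions as noted above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/CodeGemma-7b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the two shards across available devices
    torch_dtype="auto",  # the card lists float16 as the torch data type
)

prompt = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))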

Best Alternatives to CodeGemma 7B AWQ

Best Alternatives                     Context / RAM     Downloads   Likes
Codegemma 7B AWQ                      8K / 7.2 GB       5           0
SeaLLM 7B V2.5 AWQ                    8K / 7.2 GB       128         2
Gemma 1.1 7B It AWQ                   8K / 7.2 GB       77          0
SeaLLM 7B V2.5 AWQ                    8K / 5.6 GB       9           0
Gemma Ko 7B AWQ                       8K / 5.6 GB       87          0
Codegemma 1.1 7B It AWQ               8K / 7.2 GB       78          0
Gemma 7B It AWQ                       8K / 7.2 GB       77          0
Gemma 7B It AWQ                       8K / 7.2 GB       72          2
Gemma 7B AWQ                          8K / 7.2 GB       69          0
...t Cleaner Gemma 32k Merged 16b     31K / 17.1 GB     5           0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227