Python Code 13B GPTQ by TheBloke

Tags: 4-bit, Autotrain compatible, Base model:ajibawa-2023/python..., Base model:quantized:ajibawa-2..., Code, Dataset:ajibawa-2023/python-co..., En, Gptq, Llama, Quantized, Region:us, Safetensors

Python Code 13B GPTQ Benchmarks

[Benchmark chart for Python Code 13B GPTQ (TheBloke/Python-Code-13B-GPTQ): scores ("nn.n%") show how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").]

Python Code 13B GPTQ Parameters and Internals

Model Type: llama
Additional Notes: Quantized using hardware provided by Massed Compute. The original model and quantized versions are available for GPU inference.
Supported Languages: en (High)

Training Details
Data Sources: ajibawa-2023/Python-Code-23k-ShareGPT (see the loading sketch below)
Data Volume: 23,000+ code sets, each with two conversations
Training Time: 13 hours (3 epochs)
Hardware Used: Azure 4 x A100 80GB
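
As a rough illustration of the training data shape, the ShareGPT-style dataset listed above can be pulled down with the Hugging Face datasets library. This is a minimal sketch, not taken from the model card; the split name "train" and the record layout are assumptions.

    from datasets import load_dataset

    # Fine-tuning dataset listed under Data Sources (split name is an assumption).
    ds = load_dataset("ajibawa-2023/Python-Code-23k-ShareGPT", split="train")

    print(len(ds))   # expected on the order of 23,000 examples
    print(ds[0])     # one ShareGPT-style record; field names may differ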

Input Output
Input Format:
This is a conversation with your helpful AI assistant. AI assistant can generate Python Code along with necessary explanation.

Context
You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:
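
As a usage sketch (not part of the model card itself), the prompt format above can be fed to the GPTQ weights through the Transformers integration. This assumes a CUDA GPU and that optimum and auto-gptq are installed alongside transformers; the prompt and generation settings are illustrative.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/Python-Code-13B-GPTQ"

    # With optimum + auto-gptq installed, Transformers loads the GPTQ weights directly.
    tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Write a Python function that checks whether a number is prime."
    prompt_template = (
        "This is a conversation with your helpful AI assistant. "
        "AI assistant can generate Python Code along with necessary explanation.\n\n"
        "Context\nYou are a helpful AI assistant.\n\n"
        f"USER: {prompt}\nASSISTANT:"
    )

    inputs = tokenizer(prompt_template, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0], skip_special_tokens=True))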

LLM Name: Python Code 13B GPTQ
Repository: https://huggingface.co/TheBloke/Python-Code-13B-GPTQ
Model Name: Python Code 13B
Model Creator: Feynman Innovations
Base Model(s): Python Code 13B (ajibawa-2023/Python-Code-13B)
Model Size: 13b
Required VRAM: 7.3 GB
Updated: 2025-02-22
Maintainer: TheBloke
Model Type: llama
Model Files: 7.3 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: cc-by-nc-nd-4.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
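
The architecture, context length, special tokens, and vocabulary size listed above can be cross-checked against the repository's config and tokenizer files. A small sketch, assuming only that transformers is installed (the expected values in the comments come from the table above):

    from transformers import AutoConfig, AutoTokenizer

    repo = "TheBloke/Python-Code-13B-GPTQ"

    config = AutoConfig.from_pretrained(repo)
    tokenizer = AutoTokenizer.from_pretrained(repo)

    print(config.architectures)            # expected: ['LlamaForCausalLM']
    print(config.max_position_embeddings)  # expected: 4096
    print(config.vocab_size)               # expected: 32000
    print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>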

Best Alternatives to Python Code 13B GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
Yarn Llama 2 13B 128K GPTQ | 128K / 7.3 GB | 3816
LongAlign 13B 64K GPTQ | 64K / 7.3 GB | 841
...boros L2 13B 2 1 YaRN 64K GPTQ | 64K / 7.3 GB | 183
Yarn Llama 2 13B 64K GPTQ | 64K / 7.3 GB | 421
OrcaMaid V3 13B 32K GPTQ | 32K / 7.3 GB | 333
OrcaMaid V2 FIX 13B 32K GPTQ | 32K / 7.3 GB | 244
EverythingLM 13B 16K GPTQ | 16K / 7.3 GB | 3913
Tinybra 13B GPTQ 32g 4BIT | 16K / 8 GB | 731
Tinybra 13B GPTQ 4BIT | 16K / 7 GB | 720
WhiteRabbitNeo 13B GPTQ | 16K / 7.3 GB | 754

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227