Speechless Codellama 34B V2.0 GPTQ by TheBloke


Tags: arxiv:2308.12950 · 4-bit · autotrain-compatible · base model (quantized): uukuguy/speechless-codellama-34b-v2.0 · code · codegen · datasets: garage-bAInd/Open-Platypus, jondurbin/airoboros-2.2, Open-Orca/OpenOrca, WizardLM/WizardLM_evol_instruct_V2_196k · en · gptq · instruct · llama · llama2 · model-index · quantized · region:us · safetensors


Speechless Codellama 34B V2.0 GPTQ Parameters and Internals

Model Type 
llama, code
Use Cases 
Areas:
commercial, research
Applications:
code synthesis, code understanding, Python programming, code assistant, generation applications
Considerations:
Use that violates applicable laws or regulations is prohibited, and the model is not intended for use in languages other than English.
Supported Languages 
en (English)
Training Details 
Data Sources:
jondurbin/airoboros-2.2, Open-Orca/OpenOrca, garage-bAInd/Open-Platypus, WizardLM/WizardLM_evol_instruct_V2_196k
Data Volume:
153,013 samples
Training Time:
1 day, 14:42:57.87
Hardware Used:
2 × H800 (80 GB)
Model Architecture:
auto-regressive transformer
Input Output 
Input Format:
{prompt}
Accepted Modalities:
text
Output Format:
text
Performance Tips:
The model's performance can vary with the GPTQ parameters used for a given quantisation, such as bit size, group size, damp percentage, and calibration dataset; a loading sketch follows the release notes below.
Release Notes 
Version:
2.0
Notes:
Multiple quantisation variants are provided so that performance can be tuned to the available hardware; the GPTQ parameters of each variant are documented to help users pick the best trade-off between VRAM use and inference accuracy.
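Because several GPTQ variants are published and the input format is a bare completion string, loading and prompting can be done directly with Hugging Face Transformers. The snippet below is a minimal sketch, assuming auto-gptq (or optimum with a GPTQ backend) is installed and a GPU with roughly 18 GB of free VRAM is available; only the "main" branch is certain to exist, so any alternative revision name for other group sizes should be verified on the repository page.

```python
# Minimal loading sketch (assumptions: auto-gptq/optimum installed, CUDA GPU
# with ~18 GB free VRAM; branch names other than "main" must be checked on
# the repository page before use).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/speechless-codellama-34b-v2.0-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the quantised weights on the available GPU(s)
    revision="main",     # swap for another branch to change GPTQ parameters
)

# The card lists the input format as a bare "{prompt}", i.e. plain completion.
prompt = "Write a Python function that reverses a string.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
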
LLM Name: Speechless Codellama 34B V2.0 GPTQ
Repository: https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GPTQ
Model Name: Speechless Codellama 34B v2.0
Model Creator: Jiangwen Su
Base Model(s): uukuguy/speechless-codellama-34b-v2.0
Model Size: 34b
Required VRAM: 17.7 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 17.7 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
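
The architecture figures in the list above (context length, vocabulary size, special tokens, torch dtype) can be cross-checked without downloading the 17.7 GB of weights, since they live in the repository's config and tokenizer files. A small sketch, assuming Hugging Face Transformers and network access:

```python
# Sanity-check sketch for the metadata listed above; only config.json and the
# tokenizer files are fetched, not the quantised weights.
from transformers import AutoConfig, AutoTokenizer

model_id = "TheBloke/speechless-codellama-34b-v2.0-GPTQ"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.max_position_embeddings)   # context length, expected 16384
print(config.vocab_size)                # expected 32000
print(config.torch_dtype)               # expected torch.float16
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>
```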

Best Alternatives to Speechless Codellama 34B V2.0 GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
CodeLlama 34B Instruct GPTQ | 16K / 18.3 GB | 13974
CodeLlama 34B Instruct Fp16 | 16K / 67.5 GB | 19847
... Uncensored CodeLlama 34B GPTQ | 16K / 17.7 GB | 317
...gpt 32K Codellama 34B Instruct | 32K / 67.5 GB | 4812
CodeLlama 34B Instruct Hf | 16K / 67.5 GB | 45712280
Speechless Codellama 34B V2.0 | 16K / 67.5 GB | 116617
CodeLlama 34B Instruct Hf | 16K / 67.5 GB | 212214
Speechless Codellama 34B V1.9 | 16K / 67.5 GB | 11660
XAgentLLaMa 34B Preview | 16K / 157.3 GB | 143
MathCoder CL 34B | 16K / 67.5 GB | 4162



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217