Sorceroboros 33B S2a4 Gptq by chargoddard


Tags: Merged Model | 4-bit | Autotrain compatible | Custom code | GPTQ | Instruct | Llama | Quantized | Language: en | Endpoints compatible | Region: US
Datasets: ehartford/wizard_vicuna_70k_unfiltered, ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split, jondurbin/airoboros-gpt4-1.4.1, openai/summarize_from_feedback

Sorceroboros 33B S2a4 Gptq Benchmarks

Benchmark scores show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Sorceroboros 33B S2a4 Gptq (chargoddard/sorceroboros-33b-s2a4-gptq)

Sorceroboros 33B S2a4 Gptq Parameters and Internals

Model Type:
text generation
Additional Notes:
When loading, make sure that `compress_pos_emb` (or `scale`, depending on the loader) is set to 2 and `alpha_value` is set to 4, as in the sketch below.
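A minimal loading sketch, assuming turboderp's original exllama library (whose `ExLlamaConfig` exposes both settings); the import layout and local paths are illustrative and may differ by version:

```python
# Minimal sketch, assuming the original exllama library; paths are illustrative.
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "./sorceroboros-33b-s2a4-gptq"   # hypothetical local download

config = ExLlamaConfig(f"{model_dir}/config.json")
config.model_path = f"{model_dir}/model.safetensors"
config.max_seq_len = 8192      # matches the model's trained context length
config.compress_pos_emb = 2    # linear RoPE scaling ("scale" in some loaders)
config.alpha_value = 4         # NTK-aware RoPE scaling

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(f"{model_dir}/tokenizer.model")
generator = ExLlamaGenerator(model, tokenizer, ExLlamaCache(model))

print(generator.generate_simple("USER: Hello! ASSISTANT:", max_new_tokens=32))
```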
Supported Languages:
en (English)
Training Details:
Data Sources:
ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split, jondurbin/airoboros-gpt4-1.4.1, openai/summarize_from_feedback, ehartford/wizard_vicuna_70k_unfiltered
Methodology:
This model was trained with linear and NTK-aware RoPE scaling applied in tandem (see the sketch after this block).
Context Length:
8192
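To make the "in tandem" scaling concrete, here is a small illustrative sketch of how the two RoPE adjustments compose: NTK-aware scaling stretches the rotary base, while linear scaling (positional interpolation) compresses the position indices. The function name and defaults are hypothetical; the two formulas follow standard RoPE-scaling practice.

```python
import torch

def scaled_rope_angles(head_dim=128, base=10000.0, scale=2.0, alpha=4.0, seq_len=8192):
    """Hypothetical helper: RoPE angles with linear + NTK-aware scaling combined."""
    # NTK-aware scaling: stretch the rotary base by alpha^(d / (d - 2))
    ntk_base = base * alpha ** (head_dim / (head_dim - 2))
    inv_freq = 1.0 / (ntk_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Linear scaling (positional interpolation): divide position indices by `scale`
    positions = torch.arange(seq_len).float() / scale
    # The outer product gives the angle for each (position, frequency) pair,
    # which is fed through sin/cos inside the attention layers.
    return torch.outer(positions, inv_freq)
```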
Input Output:
Input Format:
Vicuna 1.1 (see the example prompt below)
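The standard Vicuna 1.1 prompt template looks like this (the user message is a placeholder):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {your prompt here} ASSISTANT:
```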
LLM Name: Sorceroboros 33B S2a4 Gptq
Repository: https://huggingface.co/chargoddard/sorceroboros-33b-s2a4-gptq
Merged Model: Yes
Model Size: 33b
Required VRAM: 17.6 GB
Updated: 2025-02-22
Maintainer: chargoddard
Model Type: llama
Instruction-Based: Yes
Model Files: 17.6 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
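As an alternative loading path matching the specs above (LlamaForCausalLM, 4-bit GPTQ, ~17.6 GB of weights), here is a hedged sketch using AutoGPTQ. Only the repository name comes from the table; the rest is a generic usage pattern, and this path may not expose the `compress_pos_emb`/`alpha_value` settings, which is why the ExLlama-style loader above is preferable for long contexts.

```python
# Minimal sketch, assuming the auto-gptq and transformers packages are installed.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "chargoddard/sorceroboros-33b-s2a4-gptq"

tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)  # LlamaTokenizer
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    device="cuda:0",         # needs roughly 17.6 GB of VRAM (see table above)
    trust_remote_code=True,  # the repo is tagged "Custom code"
)

prompt = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions. USER: Hello! ASSISTANT:")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```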

Best Alternatives to Sorceroboros 33B S2a4 Gptq

Best Alternatives                              Context / RAM    Downloads  Likes
Deepseek Coder 33B Instruct GPTQ               16K / 17.4 GB    467        26
Med Orca Instruct 33B GPTQ                     2K / 17.6 GB     6          1
Deepseek Coder 33B Instruct 4.0bpw H6 EXL2     16K / 17.1 GB    11         5
Deepseek Coder 33B Instruct 8.0bpw H8 EXL2     16K / 33.6 GB    8          3
Deepseek Coder 33B Instruct 4.65bpw H6 EXL2    16K / 19.8 GB    9          1
Deepseek Coder 33B Instruct 3.0bpw H6 EXL2     16K / 13 GB      7          1
Deepseek Coder 33B Instruct 5.0bpw H6 EXL2     16K / 21.2 GB    5          1
WizardLM 33B V1.0 Uncensored GPTQ              2K / 16.9 GB     111        42
Deepseek Coder 33B Instruct                    16K / 66.5 GB    38385      496
Deepseek Wizard 33B Slerp                      16K / 35.3 GB    10         0
