30B Epsilon GPTQ by TheBloke


Tags: 4-bit, Adventure, Alpaca, Autotrain compatible, Base model: calderaai/30b-epsil..., Base model: quantized: calderaai..., CoT, GPTQ, Hippogriff, Instruct, Llama, Manticore, Merge, Mix, Quantized, Region: us, Roleplay, RP, Safetensors, Story, SuperCOT, SuperHOT, Uncensored, Vicuna, WizardLM

30B Epsilon GPTQ Benchmarks

Scores are shown as a percentage ("nn.n%") indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
30B Epsilon GPTQ (TheBloke/30B-Epsilon-GPTQ)

30B Epsilon GPTQ Parameters and Internals

Model Type: llama

Use Cases
Areas: text-based adventure, game development, roleplaying
Applications: text-based adventure games, creative writing
Limitations: none specified
Considerations: no censorship is applied; use with care in open environments.

Additional Notes
The model allows for extensive customization in chat environments using Alpaca's instruct format.

Training Details
Methodology: use of LoRAs and model merges
Model Architecture: assembled from hand-picked models and LoRAs

Input/Output
Input Format: instruction-based prompts
Output Format: text response following Alpaca's instruct format (see the prompt sketch below)
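
Both input and output follow Alpaca's instruct format. The snippet below is a minimal sketch of that format as it is conventionally written; the exact preamble wording is an assumption and should be checked against the model card on Hugging Face.

    # Minimal sketch of an Alpaca-style instruction prompt.
    # The preamble wording is the conventional Alpaca template, assumed here
    # rather than quoted from the model card.
    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

    prompt = ALPACA_TEMPLATE.format(
        instruction="Narrate the opening scene of a text adventure set in a ruined lighthouse."
    )
    print(prompt)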
LLM Name: 30B Epsilon GPTQ
Repository: 🤗 https://huggingface.co/TheBloke/30B-Epsilon-GPTQ
Model Name: 30B Epsilon
Model Creator: Caldera AI
Base Model(s): 30B Epsilon (CalderaAI/30B-Epsilon)
Model Size: 30b
Required VRAM: 16.9 GB
Updated: 2025-02-23
Maintainer: TheBloke
Model Type: llama
Model Files: 16.9 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
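
The listing above contains what is needed to load the checkpoint: the repository name, the GPTQ quantization type, the LlamaForCausalLM architecture, the 2048-token context window, and roughly 17 GB of VRAM for the 16.9 GB of weights. The snippet below is a minimal loading sketch using the Hugging Face transformers GPTQ integration; the package setup (transformers with accelerate, optimum, and auto-gptq installed) and the generation settings are assumptions, not part of the listing.

    # Minimal sketch: loading TheBloke/30B-Epsilon-GPTQ with transformers.
    # Assumes `transformers`, `accelerate`, `optimum`, and `auto-gptq` are installed
    # and a GPU with ~17 GB of free VRAM is available (listed file size: 16.9 GB).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/30B-Epsilon-GPTQ"

    tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer per the listing
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # place the quantized weights on the available GPU
    )

    # Alpaca-style prompt, as described in the Input/Output section above.
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nWrite the opening scene of a text adventure.\n\n"
        "### Response:\n"
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)  # stay within the 2048-token context
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Because Context Length and Model Max Length are both 2048 tokens, the prompt plus the generated tokens should stay under that limit.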

Best Alternatives to 30B Epsilon GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
GPlatty 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 187
... 30B Supercot SuperHOT 8K GPTQ | 8K / 16.9 GB | 215
Platypus 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 184
Tulu 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 155
Yayi2 30B Llama GPTQ | 4K / 17 GB | 432
WizardLM 30B GPTQ | 2K / 16.9 GB | 205119
Llama 30B FINAL MODEL MINI | 2K / 19.4 GB | 71
...2 Llama 30B 7K Steps Gptq 2bit | 2K / 9.5 GB | 92
...Assistant SFT 7 Llama 30B GPTQ | 2K / 16.9 GB | 207235
WizardLM 30B V1.0 GPTQ | 2K / 16.9 GB | 51

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227