Tenebra 30B Alpha01 EXL2 3bpw by SicariusSicariiStuff


Tags: autotrain compatible, en, endpoints compatible, exl2, llama, quantized, region:us, sharded, tensorflow

Tenebra 30B Alpha01 EXL2 3bpw Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Tenebra 30B Alpha01 EXL2 3bpw (SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw)

Tenebra 30B Alpha01 EXL2 3bpw Parameters and Internals

Model Type: experimental, text-based
Additional Notes: Tenebră is an experimental AI model that focuses on complex and philosophical discussions using unconventional datasets.
LLM Name: Tenebra 30B Alpha01 EXL2 3bpw
Repository: 🤗 https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw
Model Size: 30B
Required VRAM: 12.7 GB
Updated: 2025-02-22
Maintainer: SicariusSicariiStuff
Model Type: llama
Model Files: 8.6 GB (1 of 2), 4.1 GB (2 of 2)
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.36.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
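The file sizes above follow directly from the quantization settings. As a sanity check, here is a small sketch (the ~30e9 parameter count and 3.0 bits-per-weight figure are assumptions read off the model name, not taken from the card) showing how the reported sizes line up:

```python
# Back-of-envelope check of the quantization numbers reported above.
# Assumptions (inferred from the model name, not stated on the card):
# ~30e9 parameters, 3.0 bits per weight (EXL2 3bpw).

def quantized_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# The two EXL2 shards listed on the card sum to the "Required VRAM" figure.
shard_total_gb = 8.6 + 4.1                 # 12.7 GB

estimate = quantized_weight_gb(30e9, 3.0)  # ~11.25 GB for raw weights
```

The roughly 1.4 GB gap between the raw-weight estimate and the 12.7 GB shard total is plausibly taken up by embeddings, layers kept at higher precision, and format overhead; actual runtime VRAM use will also grow with the KV cache at long contexts.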

Best Alternatives to Tenebra 30B Alpha01 EXL2 3bpw

Best Alternatives                       Context / RAM    Downloads  Likes
Tenebra 30B Alpha01 FP16                16K / 65 GB      1792       8
Tenebra 30B Alpha01 4BIT                16K / 19.4 GB    1896       1
...nebra 30B Alpha01 EXL2 2 80bpw       16K / 11.9 GB    12         1
...nebra 30B Alpha01 EXL2 2 50bpw       16K / 10.7 GB    6          0
Tenebra 30B Alpha01 3BIT                16K / 12.9 GB    9          0
... 30B Supercot SuperHOT 8K Fp16       8K / 65.2 GB     2072       4
Platypus 30B SuperHOT 8K Fp16           8K / 65.2 GB     2071       2
GPlatty 30B SuperHOT 8K Fp16            8K / 65.2 GB     2061       1
Tulu 30B Fp16                           2K / 65.2 GB     2057       5
WizardLM 30B Fp16                       2K / 65.2 GB     2058       10

Rank the Tenebra 30B Alpha01 EXL2 3bpw Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227