Helion 4x34B GPTQ by TheBloke


Tags: 4-bit · Autotrain compatible · Base model: weyaxi/helion-4x34b · Conversational · GPTQ · License: other · Mixtral · MoE · Quantized · Region: US · Safetensors · Sharded · Tensorflow · Yi
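Among the tags above, "4-bit" and "GPTQ" describe how the checkpoint stores its weights: each weight is kept as a 4-bit integer, with several values packed into one machine word, which is how a 4x34B mixture-of-experts model fits in roughly 58 GB. The toy sketch below illustrates only the packing idea (eight 4-bit values per 32-bit word); it is not TheBloke's actual quantization code, and `pack4`/`unpack4` are hypothetical helper names.

```python
# Toy illustration of 4-bit weight packing (NOT the real GPTQ kernel):
# eight unsigned 4-bit values share a single 32-bit word.

def pack4(vals):
    """Pack eight 4-bit unsigned ints (0..15) into one 32-bit word."""
    assert len(vals) == 8 and all(0 <= v <= 15 for v in vals)
    word = 0
    for i, v in enumerate(vals):
        word |= v << (4 * i)  # value i occupies bits 4*i .. 4*i+3
    return word

def unpack4(word):
    """Recover the eight 4-bit values from a packed 32-bit word."""
    return [(word >> (4 * i)) & 0xF for i in range(8)]

vals = [3, 15, 0, 7, 9, 1, 12, 5]
assert unpack4(pack4(vals)) == vals  # round-trips losslessly
```

Real GPTQ checkpoints additionally store per-group scales and zero points so the 4-bit integers can be dequantized back to float16 at inference time.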

Helion 4x34B GPTQ (TheBloke/Helion-4x34B-GPTQ)

Best Alternatives to Helion 4x34B GPTQ

Best Alternatives          HF Rank   Context / RAM    Downloads   Likes
Shqiponja 15B V2 8bit      –         32K / 16.3 GB    368         1
Shqiponja 15B V1           –         32K / 31.3 GB    0           3

Helion 4x34B GPTQ Parameters and Internals

LLM Name: Helion 4x34B GPTQ
Repository: Hugging Face (TheBloke/Helion-4x34B-GPTQ)
Model Name: Helion 4X34B
Model Creator: Yağız Çalık
Base Model(s): Helion 4x34B (Weyaxi/Helion-4x34B)
Model Size: 15b
Required VRAM: 58.3 GB
Updated: 2024-07-07
Maintainer: TheBloke
Model Type: mixtral
Model Files: 10.0 GB (1-of-6), 10.0 GB (2-of-6), 9.9 GB (3-of-6), 10.0 GB (4-of-6), 9.9 GB (5-of-6), 8.5 GB (6-of-6)
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MixtralForCausalLM
License: other
Context Length: 200000
Model Max Length: 200000
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 64000
Initializer Range: 0.02
Torch Data Type: float16
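As a quick sanity check on the card's numbers, the six safetensors shard sizes listed above should sum to the stated 58.3 GB of required VRAM:

```python
# Shard sizes (GB) as listed on the card, in order 1-of-6 .. 6-of-6.
shards_gb = [10.0, 10.0, 9.9, 10.0, 9.9, 8.5]

# Rounding absorbs float addition noise; the total matches the
# "Required VRAM: 58.3 GB" figure above.
total_gb = round(sum(shards_gb), 1)
print(total_gb)  # 58.3
```

Note this is the size of the 4-bit weights on disk; actual peak VRAM at inference will also include activations and KV cache, especially at long context lengths.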


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801