Qwen1.5 110B Chat 3.35bpw H6 EXL2 by yiximail


Tags: arxiv:2309.16609 · autotrain-compatible · chat · conversational · en · endpoints-compatible · exl2 · quantized · qwen2 · region:us · safetensors · sharded · tensorflow

Qwen1.5 110B Chat 3.35bpw H6 EXL2 Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen1.5 110B Chat 3.35bpw H6 EXL2 (yiximail/Qwen1.5-110B-Chat-3.35bpw-h6-exl2)

Qwen1.5 110B Chat 3.35bpw H6 EXL2 Parameters and Internals

Model Type: text-generation

Training Details
Methodology: Pretrained on a large amount of data, then post-trained with supervised finetuning and direct preference optimization.
Context Length: 32768
Model Architecture: Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, and a mixture of sliding-window attention and full attention.

Input/Output
Performance Tips: If you encounter code switching or other degraded outputs, use the hyper-parameters provided in `generation_config.json`.
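As a sketch only, the snippet below shows one way to pull those shipped hyper-parameters and build a chat prompt with Hugging Face transformers. The repository id comes from this card; the message content is illustrative, and the presence of `generation_config.json` in the repo is assumed from the tip above.

```python
# Sketch: read the sampling defaults shipped in generation_config.json and
# build a ChatML-style prompt via the tokenizer's chat template
# (transformers >= 4.37, matching the version listed on this card).
from transformers import AutoTokenizer, GenerationConfig

repo = "yiximail/Qwen1.5-110B-Chat-3.35bpw-h6-exl2"

gen_config = GenerationConfig.from_pretrained(repo)  # parses generation_config.json
print(gen_config)  # inspect temperature, top_p, repetition_penalty, etc.

tokenizer = AutoTokenizer.from_pretrained(repo)
messages = [{"role": "user", "content": "Give me a short introduction to LLMs."}]  # illustrative
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # the prompt format expected by Qwen1.5 chat models
```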
LLM Name: Qwen1.5 110B Chat 3.35bpw H6 EXL2
Repository 🤗: https://huggingface.co/yiximail/Qwen1.5-110B-Chat-3.35bpw-h6-exl2
Model Size: 110b
Required VRAM: 49 GB
Updated: 2024-12-26
Maintainer: yiximail
Model Type: qwen2
Model Files: 8.6 GB (1-of-6), 8.4 GB (2-of-6), 8.5 GB (3-of-6), 8.6 GB (4-of-6), 8.6 GB (5-of-6), 6.3 GB (6-of-6)
Supported Languages: en
Quantization Type: exl2
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
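
Since the quantization type is exl2, these files are meant to be loaded with the exllamav2 runtime rather than plain transformers. A minimal loading sketch, assuming exllamav2 0.1.x API names and a local download of the repo; the path and prompt are illustrative, and with roughly 49 GB of weights the lazy autosplit path is the practical choice on multi-GPU machines:

```python
# Sketch: load this EXL2 quant with exllamav2 (API names per exllamav2 0.1.x;
# they may differ in other versions).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/models/Qwen1.5-110B-Chat-3.35bpw-h6-exl2"  # illustrative local path

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # defer allocation until layers load
model.load_autosplit(cache)               # spread ~49 GB of weights across GPUs
tokenizer = ExLlamaV2Tokenizer(config)

# Pass paged=False here if flash-attn is not installed.
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, who are you?", max_new_tokens=64))
```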

Best Alternatives to Qwen1.5 110B Chat 3.35bpw H6 EXL2

Best Alternatives                        Context / RAM      Downloads  Likes
Qwen1.5 110B Chat 4bit                   32K / 62.2 GB             11      5
Qwen1.5 110B Chat 8bit                   32K / 179.8 GB            14      1
Qwen1.5 110B Chat 3.25bpw H6 EXL2        32K / 47.7 GB             11      1
Qwen1.5 110B 4bit                        8K / 62.2 GB              12      1
Qwen1.5 110B Chat                        32K / 158.3 GB          6356    123
Qwen1.5 110B                             32K / 221.7 GB          3111     93
Airoboros 110B 3.3                       32K / 158.3 GB           358      2
Qwen1.5 110B Chat AWQ                    32K / 61.7 GB             61      8
Qwen1.5 110B Chat GGUF                   32K / 3.1 GB             293      2
Dolphin 2.9.1 Qwen 110B                  32K / 193.4 GB            46     26


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227