Zephyr 7B Alpha GPTQ by TheBloke


Tags: Arxiv:2305.18290, 4-bit, Autotrain compatible, Base model:huggingfaceh4/zephy..., Base model:quantized:huggingfa..., Dataset:openbmb/ultrafeedback, Dataset:stingning/ultrachat, En, Generated from trainer, Gptq, Mistral, Quantized, Region:us, Safetensors

Zephyr 7B Alpha GPTQ Benchmarks

Benchmark scores indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Zephyr 7B Alpha GPTQ (TheBloke/zephyr-7B-alpha-GPTQ)

Zephyr 7B Alpha GPTQ Parameters and Internals

Model Type 
GPT-like, fine-tuned
Use Cases 
Areas:
research, educational purposes
Applications:
chat applications
Primary Use Cases:
acting as a helpful assistant, chat generation
Limitations:
likely to generate problematic text when prompted, not aligned to human preferences like ChatGPT
Considerations:
For research and educational purposes due to potential generation of harmful content.
Additional Notes 
Trained with methodologies that do not strictly align outputs to human preferences.
Supported Languages 
en (Primary Language)
Training Details 
Data Sources:
stingning/ultrachat, openbmb/UltraFeedback
Methodology:
Fine-tuned using Direct Preference Optimization (DPO)
Model Architecture:
7B parameter GPT-like model
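The DPO objective named above (arXiv:2305.18290, linked in the tags) trains the policy directly on preference pairs instead of fitting a separate reward model. A minimal sketch of the per-pair loss, in plain Python; the function name `dpo_loss` and its scalar inputs are illustrative, not from the model card:

```python
import math

def dpo_loss(logp_chosen_pi, logp_rejected_pi,
             logp_chosen_ref, logp_rejected_ref, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the trained policy (pi) and the frozen reference model (ref).
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = (logp_chosen_pi - logp_chosen_ref) - (logp_rejected_pi - logp_rejected_ref)
    # -log sigmoid(beta * margin): minimized by a large positive margin.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With zero margin the loss is log(2) ~ 0.6931; it shrinks as the policy
# prefers the chosen response more strongly than the reference does.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))
```

In practice this is what libraries such as TRL's `DPOTrainer` compute over batches of tokenized preference pairs; `beta` controls how far the policy may drift from the reference model.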
Responsible AI Considerations 
Fairness:
The model has not been specifically aligned to human preferences with RLHF or in-the-loop filtering, which may affect fairness.
Transparency:
The model can produce problematic outputs if prompted to do so.
Accountability:
Not provided.
Mitigation Strategies:
The model uses Direct Preference Optimization techniques to improve responses within its training process.
Input Output 
Input Format:
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
Accepted Modalities:
text
Output Format:
generated text in response to input prompt
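The template above can be filled in with a small helper. This is a hand-rolled sketch (the function name is our own); with a recent `transformers` you would normally let `tokenizer.apply_chat_template` do this instead:

```python
def build_zephyr_prompt(user_prompt: str, system_prompt: str = "") -> str:
    """Fill the Zephyr chat template shown above.

    The </s> separators match the model's EOS/padding token; an empty
    system prompt still keeps the <|system|> block, per the template.
    """
    return (
        f"<|system|>\n{system_prompt}</s>\n"
        f"<|user|>\n{user_prompt}</s>\n"
        f"<|assistant|>\n"
    )

print(build_zephyr_prompt("Why is the sky blue?", "You are a helpful assistant."))
```

The model then generates text continuing after the final `<|assistant|>` line.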
LLM Name: Zephyr 7B Alpha GPTQ
Repository 🤗: https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ
Model Name: Zephyr 7B Alpha
Model Creator: Hugging Face H4
Base Model(s): Zephyr 7B Alpha (HuggingFaceH4/zephyr-7b-alpha)
Model Size: 7B
Required VRAM: 4.2 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: mistral
Model Files: 4.2 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MistralForCausalLM
License: mit
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
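The 4.2 GB figure above is consistent with storing a ~7B-parameter model at 4 bits per weight plus quantization overhead. A rough back-of-envelope estimate (our own formula, not from the source; the ~7.24e9 parameter count for Mistral 7B is approximate):

```python
def quantized_weights_gb(n_params: float, bits: int) -> float:
    """Approximate size of the quantized weights alone, in gigabytes.

    Ignores GPTQ group scales/zero-points, non-quantized layers such as
    embeddings, and runtime costs like activations and the KV cache.
    """
    return n_params * bits / 8 / 1e9

# ~7.24e9 parameters (Mistral 7B) at 4 bits per weight:
print(f"{quantized_weights_gb(7.24e9, 4):.2f} GB")  # ~3.6 GB of raw weights
```

The gap between this ~3.6 GB of raw weights and the 4.2 GB file size is mostly the GPTQ group metadata and layers kept in higher precision.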

Quantized Models of the Zephyr 7B Alpha GPTQ

Model | Likes | Downloads | VRAM
LMS Chatbot | 0 | 5 | 0 GB
Zephyr 7B Alpha GPTQ | 1 | 2 | 0 GB
Thesa | 1 | 0 | 0 GB
Zephyr Support Chatbot | 1 | 0 | 0 GB
Zephyr Support Chatbot | 1 | 0 | 0 GB

Best Alternatives to Zephyr 7B Alpha GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
Mistral 7B Instruct V0.2 GPTQ | 32K / 4.2 GB | 573673 | 50
Mistral 7B Instruct V0.3 GPTQ | 32K / 4.2 GB | 15289 | 0
Mistral 7B Instruct V0.1 GPTQ | 32K / 4.2 GB | 164151 | 78
...ral 7B Instruct V0.3 GPTQ 4bit | 32K / 4.2 GB | 14731 | 6
...ephyr 7B Beta Channelwise Gptq | 32K / 4 GB | 11389 | 0
...hyr 7B Beta Channelwise Marlin | 32K / 4 GB | 5611 | 0
Zephyr 7B Beta Marlin | 32K / 4.1 GB | 5673 | 0
...baraHermes 2.5 Mistral 7B GPTQ | 32K / 4.2 GB | 2459 | 55
...istral 7B Pruned50 GPTQ Marlin | 32K / 4 GB | 5 | 0
Mistral 7B Unsloth Gptq 8bit | 32K / 7.7 GB | 6 | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217