Phoenix GPTQ by DRXD1000


Tags: Arxiv:2401.10580, 4-bit, 4bit, Alignment-handbook, Autotrain compatible, Conversational, De, Dpo, Endpoints compatible, Gptq, Mistral, Quantization, Quantized, Region:us, Safetensors
Model Card on HF 🤗: https://huggingface.co/DRXD1000/Phoenix-GPTQ


Phoenix GPTQ Parameters and Internals

Model Type: GPT-like 7B model, DPO fine-tuned
Additional Notes: Model trained in German, with GPTQ quantization.
Supported Languages: German (NLP)
Training Details:
Data Sources: HuggingFaceH4/ultrachat_200k, argilla/ultrafeedback-binarized-preferences
Hardware Used: 8 × A100 80GB
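The second data source above is a binarized preference set used for DPO. One preference record can be sketched as follows; the field names follow the common chosen/rejected convention and are not verified against the exact dataset schema:

```python
# Illustrative shape of one DPO preference record, as used by
# alignment-handbook-style training on binarized UltraFeedback data.
# Field names and contents here are illustrative assumptions.
example = {
    "prompt": "Explain GPTQ quantization in one sentence.",
    "chosen": "GPTQ compresses model weights to roughly 4 bits each.",
    "rejected": "GPTQ makes the model bigger.",
}
# DPO trains the policy to prefer "chosen" over "rejected"
# completions relative to a frozen reference model.
print(sorted(example))
```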
Input Output:
Input Format:
<|system|>
<|user|>
{prompt}
<|assistant|>
Accepted Modalities: text
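The input format above can be assembled with a small helper. This is a sketch only; the exact placement of newlines and end-of-turn markers may differ from the model's bundled chat template:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the documented format:
    <|system|>, <|user|> {prompt}, <|assistant|> blocks,
    one per line (newline placement is an assumption)."""
    return (
        f"<|system|>\n{system}\n"
        f"<|user|>\n{user}\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "Du bist ein hilfreicher Assistent.",  # "You are a helpful assistant."
    "Was ist GPTQ-Quantisierung?",         # "What is GPTQ quantization?"
)
print(prompt)
```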
LLM Name: Phoenix GPTQ
Repository 🤗: https://huggingface.co/DRXD1000/Phoenix-GPTQ
Model Size: 1.2b
Required VRAM: 4.2 GB
Updated: 2025-02-05
Maintainer: DRXD1000
Model Type: mistral
Model Files: 4.2 GB
Supported Languages: de
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
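The 4.2 GB figure for Required VRAM and Model Files is consistent with 4-bit weights for a ~7B-parameter model plus quantization overhead. A quick sanity check (the parameter count used here is an assumed approximation, not taken from the model card):

```python
# Back-of-the-envelope size check for a 4-bit GPTQ quantization of a
# ~7B-parameter Mistral model (the "4.2 GB" figures above).
params = 7.24e9        # assumed approximate Mistral-7B parameter count
bits_per_weight = 4    # GPTQ 4-bit
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"weights alone: ~{weights_gb:.1f} GB")
# Quantization metadata (per-group scales and zero points) and any
# higher-precision layers add overhead on top of the raw weights,
# which closes the gap to the ~4.2 GB of shipped files.
```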

Best Alternatives to Phoenix GPTQ

Best Alternatives                       Context / RAM   Downloads  Likes
... Finetune 16bit Ver9 Main GPTQ       32K / 4.2 GB    76         0
Dictalm2.0 Instruct GPTQ                32K / 4.2 GB    158        0
Dictalm2.0 GPTQ                         32K / 4.2 GB    129        0
Multi Verse Model GPTQ                  32K / 4.2 GB    131        1
Turdus GPTQ                             32K / 4.2 GB    21         5
Garrulus GPTQ                           32K / 4.2 GB    18         3
HamSter 0.1 GPTQ                        32K / 4.2 GB    16         2
Mistral Ft Optimized 1227 GPTQ          32K / 4.2 GB    20         2
...h Openchat 3.5 1210 Slerp GPTQ       32K / 4.2 GB    17         1
...hat 3.5 1210 Seraph Slerp GPTQ       32K / 4.2 GB    7          2


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227