Phoenix GPTQ by DRXD1000


Tags: Arxiv:2401.10580 · 4-bit · 4bit · Alignment-handbook · Autotrain compatible · Conversational · De · Dpo · Endpoints compatible · Gptq · Mistral · Quantization · Quantized · Region:us · Safetensors
Model Card on HF 🤗: https://huggingface.co/DRXD1000/Phoenix-GPTQ

Phoenix GPTQ Benchmarks

nn.n%: how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Phoenix GPTQ Parameters and Internals

Model Type 
GPT-like 7B model, fine-tuned with DPO
Additional Notes 
Trained on German-language data; weights quantized with GPTQ.
Supported Languages 
German (NLP)
Training Details 
Data Sources:
HuggingFaceH4/ultrachat_200k, argilla/ultrafeedback-binarized-preferences
Hardware Used:
8 x A100 80GB
Input Output 
Input Format:
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
Accepted Modalities:
text
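The input format above matches the Zephyr-style chat template. A minimal sketch of assembling a prompt for this model (the exact newline and `</s>` placement are assumptions based on that common format, not confirmed by the card):

```python
# Sketch: build a Zephyr-style prompt from a system message and a user turn.
# The </s> end-of-turn markers match the card's padding token; their exact
# placement is an assumption.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt("Du bist ein hilfreicher Assistent.", "Was ist GPTQ?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template` on the tokenizer shipped with the repo is the safer way to get the exact template the model was trained with.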
LLM Name: Phoenix GPTQ
Repository 🤗: https://huggingface.co/DRXD1000/Phoenix-GPTQ
Model Size: 7.2b
Required VRAM: 4.2 GB
Updated: 2025-03-12
Maintainer: DRXD1000
Model Type: mistral
Model Files: 4.2 GB
Supported Languages: de
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
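Given the fields above, a hedged sketch of loading the checkpoint with the Hugging Face `transformers` library (GPTQ checkpoints additionally need the `auto-gptq` or `gptqmodel` package plus `accelerate`; `device_map="auto"` is an assumption, and the call is not executed here):

```python
MODEL_ID = "DRXD1000/Phoenix-GPTQ"  # repo id from the model card

def load_model():
    """Sketch only: requires a CUDA GPU with ~4.2 GB free VRAM."""
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",  # shard/place the 4-bit weights automatically
    )
    return tokenizer, model
```

The GPTQ quantization config is stored in the repo itself, so `from_pretrained` picks up the 4-bit settings without extra arguments.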

Best Alternatives to Phoenix GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
... Finetune 16bit Ver9 Main GPTQ | 32K / 4.2 GB | 12 | 0
Dictalm2.0 Instruct GPTQ | 32K / 4.2 GB | 99 | 0
Dictalm2.0 GPTQ | 32K / 4.2 GB | 34 | 0
Multi Verse Model GPTQ | 32K / 4.2 GB | 32 | 1
Turdus GPTQ | 32K / 4.2 GB | 79 | 5
Garrulus GPTQ | 32K / 4.2 GB | 28 | 3
HamSter 0.1 GPTQ | 32K / 4.2 GB | 38 | 2
Mistral Ft Optimized 1227 GPTQ | 32K / 4.2 GB | 37 | 2
Metis 0.5 GPTQ | 32K / 4.2 GB | 24 | 1
...hat 3.5 1210 Seraph Slerp GPTQ | 32K / 4.2 GB | 9 | 2
Note: a green score (e.g. "73.2") means that the model is better than DRXD1000/Phoenix-GPTQ.
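All of these 4-bit Mistral quantizations land near the same 4.2 GB footprint. A back-of-envelope check, assuming roughly 7.24B parameters (the usual Mistral-7B count, an assumption here):

```python
# Rough size estimate for a 4-bit GPTQ quantization of a ~7B model.
params = 7.24e9            # assumed parameter count (Mistral-7B class)
bytes_per_param = 4 / 8    # 4-bit quantized weights
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb, 2))
# Raw 4-bit weights come to ~3.6 GB; per-group scales, zero-points, and
# unquantized embeddings/head account for the rest of the ~4.2 GB on disk.
```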



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227