Synatra 7B V0.3 DPO GPTQ by TheBloke


Tags: 4-bit | Autotrain compatible | Base model: maywell/synatra-7b-... | Base model (quantized): maywell/s... | Conversational | GPTQ | Mistral | Quantized | Region: us | Safetensors

Synatra 7B V0.3 DPO GPTQ Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Synatra 7B V0.3 DPO GPTQ (TheBloke/Synatra-7B-v0.3-dpo-GPTQ)

Synatra 7B V0.3 DPO GPTQ Parameters and Internals

Model Type
mistral
Additional Notes
Multiple GPTQ quantization parameter sets are provided, allowing you to choose the option that best fits your hardware and VRAM budget.
Training Details
Hardware: A100 80GB × 1
Instruction Format: ChatML and Alpaca (No-Input)
Context Length: 4096
Input Output
Input Format: ChatML
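
To make the ChatML input format concrete, a prompt for this model would typically be assembled as sketched below; the system and user text is illustrative only and not taken from the model card.

# A minimal sketch of the ChatML prompt layout used by this model.
# The system/user messages below are examples, not from the model card.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain GPTQ quantization in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
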
LLM Name: Synatra 7B V0.3 DPO GPTQ
Repository 🤗: https://huggingface.co/TheBloke/Synatra-7B-v0.3-dpo-GPTQ
Model Name: Synatra 7B V0.3 dpo
Model Creator: Jeonghwan Park
Base Model(s): maywell/Synatra-7B-v0.3-dpo
Model Size: 7b
Required VRAM: 4.2 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: mistral
Model Files: 4.2 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MistralForCausalLM
License: cc-by-sa-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
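
Given the repository and architecture details above, a GPTQ checkpoint like this can usually be loaded with a recent transformers release (with optimum and auto-gptq, or gptqmodel, installed). The snippet below is a sketch under those assumptions; the prompt text and generation settings are illustrative defaults, not values from the model card.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Synatra-7B-v0.3-dpo-GPTQ"

# The ~4.2 GB of 4-bit GPTQ weights are dequantized on the fly at inference;
# device_map="auto" places them on the available GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML-style prompt (see the input-format note above).
prompt = (
    "<|im_start|>user\n"
    "Summarize what GPTQ quantization does.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If the repository provides alternative GPTQ branches (e.g. different group-size or act-order settings), they can be selected by passing the branch name via the revision argument of from_pretrained.
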

Best Alternatives to Synatra 7B V0.3 DPO GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
Mistral 7B Instruct V0.2 GPTQ | 32K / 4.2 GB | 391638 | 50
Mistral 7B Instruct V0.3 GPTQ | 32K / 4.2 GB | 8638 | 0
...ral 7B Instruct V0.3 GPTQ 4bit | 32K / 4.2 GB | 1896 | 18
...ephyr 7B Beta Channelwise Gptq | 32K / 4 GB | 9922 | 0
NeuralBeagle14 7B GPTQ | 32K / 4.2 GB | 16920 | 5
...baraHermes 2.5 Mistral 7B GPTQ | 32K / 4.2 GB | 3708 | 56
...istral 7B Pruned50 GPTQ Marlin | 32K / 4 GB | 76 | 0
...phyr 7B Beta Assistant V1 Gptq | 32K / 4.2 GB | 79 | 1
...l Neural Chat 7B V3.8 Bit Gptq | 32K / 7.7 GB | 77 | 0
...lai Mistral 7B V0.1 4 Bit Gptq | 32K / 4.2 GB | 79 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227