Tulu 2 DPO 70B AWQ by TheBloke


Tags: Arxiv:2305.18290, Arxiv:2311.10702, 4-bit, Autotrain compatible, AWQ, Base model: allenai/tulu-2-dpo-..., Base model (quantized): allenai/t..., Dataset: allenai/tulu-v2-sft-mi..., Dataset: huggingfaceh4/ultrafee..., En, Llama, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Tulu 2 DPO 70B AWQ Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Tulu 2 DPO 70B AWQ (TheBloke/tulu-2-dpo-70B-AWQ)

Tulu 2 DPO 70B AWQ Parameters and Internals

Model Type: llama
Use Cases:
- Areas: research, commercial applications
- Limitations: can produce problematic outputs
Supported Languages: en (primary)
Training Details:
- Data Sources: publicly available, synthetic, and human datasets, including the Tulu V2 mix dataset and the openbmb/UltraFeedback dataset
- Methodology: Direct Preference Optimization (DPO); the objective is shown below
- Context Length: 8192
- Hardware Used: Massed Compute hardware
Model Architecture: instruction- and RLHF-tuned chat model
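For context, the DPO objective from the cited paper (arXiv:2305.18290) optimizes the policy π_θ on preference pairs, where y_w is the chosen and y_l the rejected response, against a frozen reference policy π_ref, with β controlling how far the policy may drift from the reference:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$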
Input/Output:
- Input Format:
  <|user|>
  {prompt}
  <|assistant|>
- Accepted Modalities: text
- Output Format: text
- Performance Tips: ensure a newline after <|assistant|> for best results (see the sketch below)
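A minimal usage sketch (illustrative, not from the model card): it assumes transformers >= 4.35 with the autoawq package installed so the AWQ weights load directly, and enough GPU memory for the quantized weights. The build_prompt helper is hypothetical; it simply applies the template above, including the trailing newline after <|assistant|>.

```python
# Illustrative sketch only. Assumes transformers >= 4.35 with the
# `autoawq` package installed, plus enough GPU memory for the
# ~36.6 GB of AWQ weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/tulu-2-dpo-70B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def build_prompt(user_message: str) -> str:
    # Tulu template: the trailing newline after <|assistant|> matters.
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

inputs = tokenizer(
    build_prompt("Summarize Direct Preference Optimization in one sentence."),
    return_tensors="pt",
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```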
LLM Name: Tulu 2 DPO 70B AWQ
Repository 🤗: https://huggingface.co/TheBloke/tulu-2-dpo-70B-AWQ
Model Name: Tulu 2 DPO 70B
Model Creator: Allen Institute for AI
Base Model(s): Tulu 2 DPO 70B (allenai/tulu-2-dpo-70b)
Model Size: 70b
Required VRAM: 36.6 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Model Files: 9.9 GB (1-of-4), 9.9 GB (2-of-4), 9.9 GB (3-of-4), 6.9 GB (4-of-4)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
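Note that the shard sizes sum to the required VRAM figure: 3 × 9.9 GB + 6.9 GB = 36.6 GB. A minimal sketch (assuming the huggingface_hub package) for pre-fetching all four shards into the local cache:

```python
# Illustrative sketch (assumes `huggingface_hub` is installed).
# snapshot_download resolves all four safetensors shards plus the
# tokenizer and config files into the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("TheBloke/tulu-2-dpo-70B-AWQ")
print(local_dir)  # ~36.6 GB on disk: 3 x 9.9 GB + 1 x 6.9 GB shards
```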

Best Alternatives to Tulu 2 DPO 70B AWQ

Best Alternatives | Context / RAM | Downloads | Likes
...0B Instruct Gradient 1048K AWQ | 1024K / 39.9 GB | 4 | 1
...70B Instruct Gradient 262K AWQ | 256K / 39.9 GB | 5 | 0
Llama 3.3 70B Instruct AWQ | 128K / 39.9 GB | 44722 | 27
...lama 3.3 70B Instruct AWQ INT4 | 128K / 39.9 GB | 24503 | 16
... SauerkrautLM 70B Instruct AWQ | 128K / 39.9 GB | 107 | 4
MultiVerse 70B AWQ | 32K / 41.3 GB | 80 | 2
Opus V1.2 70B AWQ | 32K / 36.7 GB | 25 | 1
QuartetAnemoi 70B T0.0001 AWQ | 31K / 36.7 GB | 5 | 1
Senku 70B AWQ 4bit GEMM | 31K / 36.7 GB | 9 | 1
Kiqu 70B AWQ | 31K / 36.7 GB | 18 | 1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227