Tulu 2 DPO 70B GGUF by TheBloke


  Arxiv:2305.18290   Arxiv:2311.10702 Base model:allenai/tulu-2-dpo-... Base model:quantized:allenai/t... Dataset:allenai/tulu-v2-sft-mi... Dataset:huggingfaceh4/ultrafee...   En   Gguf   Llama   Quantized   Region:us

Tulu 2 DPO 70B GGUF Benchmarks

nn.n% shows how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Tulu 2 DPO 70B GGUF (TheBloke/tulu-2-dpo-70B-GGUF)

Tulu 2 DPO 70B GGUF Parameters and Internals

Model Type 
llama
Use Cases 
Areas:
research, commercial applications
Applications:
instruction tuning, chat applications
Limitations:
not aligned for safe completions, potential for problematic outputs
Considerations:
Model may produce unsafe or biased outputs.
Additional Notes 
The model is distributed in the GGUF format, the successor to GGML, for improved compatibility and performance across various UIs and libraries. Compatibility with llama.cpp and many third-party tools makes it usable across platforms.
Supported Languages 
en (Primary)
Training Details 
Data Sources:
publicly available datasets, synthetic datasets, human datasets, filtered Tulu V2 mix dataset, openbmb/UltraFeedback
Data Volume:
64k prompts in the UltraFeedback dataset
Methodology:
Direct Preference Optimization (DPO)
Model Architecture:
Fine-tuned Llama 2
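The DPO objective cited above (Arxiv:2305.18290) can be sketched per-example as follows. This is a minimal illustration, not the training code; the function and argument names are assumed, and in practice the log-probabilities are summed over response tokens under the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example Direct Preference Optimization loss (sketch).

    Inputs are log-probabilities of the chosen / rejected responses
    under the policy being trained and the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(x)), written stably as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))

# When the policy prefers the chosen response more than the reference
# does, the loss drops below log(2):
print(dpo_loss(-10.0, -20.0, -12.0, -18.0))  # ≈ 0.513
```

Minimizing this loss pushes the policy to assign relatively more probability to the preferred response than the reference model does, with beta controlling how far the policy may drift from the reference.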
Input Output 
Input Format:
<|user|> {prompt} <|assistant|>
Accepted Modalities:
text
Output Format:
Text response continuing the assistant turn
Performance Tips:
Include a newline after <|assistant|> for optimal generation.
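The template and newline tip above can be sketched as a small helper. The function name is illustrative; the newline placement follows the template and the performance tip, which recommends a newline after <|assistant|>.

```python
def format_tulu_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Tulu chat format.

    Note the trailing newline after <|assistant|>, which the model
    card recommends for optimal generation.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_tulu_prompt("What is instruction tuning?")
```

The resulting string can be passed as the prompt to any GGUF-compatible runtime, such as llama.cpp.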
LLM Name: Tulu 2 DPO 70B GGUF
Repository: https://huggingface.co/TheBloke/tulu-2-dpo-70B-GGUF
Model Name: Tulu 2 DPO 70B
Model Creator: Allen Institute for AI
Base Model(s): Tulu 2 DPO 70B (allenai/tulu-2-dpo-70b)
Model Size: 70b
Required VRAM: 29.3 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Model Files: 29.3 GB, 36.1 GB, 33.2 GB, 29.9 GB, 38.9 GB, 41.4 GB, 39.1 GB, 47.5 GB, 48.8 GB, 47.5 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: other

Best Alternatives to Tulu 2 DPO 70B GGUF

Best Alternatives | Context / RAM | Downloads / Likes
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 251056
...gekit Passthrough Yqhuxcv GGUF | 0K / 16.9 GB | 80
CodeLlama 70B Python GGUF | 0K / 25.5 GB | 158241
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 1383
KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 95027
CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 79543
DAD Model V2 70B Q4 | 0K / 42.5 GB | 50
Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 160
Llama 2 70B Chat GGUF | 0K / 29.3 GB | 6858122
Meditron 70B GGUF | 0K / 29.3 GB | 46119
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/tulu-2-dpo-70B-GGUF.

Rank the Tulu 2 DPO 70B GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 42565 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227