Airoboros 110B 3.3 by jondurbin


Tags: Autotrain compatible, Conversational, Endpoints compatible, Qwen2, Region: US, Safetensors, Sharded, Tensorflow (training datasets are listed under Training Details below)

Airoboros 110B 3.3 Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Model: Airoboros 110B 3.3 (jondurbin/airoboros-110b-3.3)

Airoboros 110B 3.3 Parameters and Internals

Model Type: Fine-tuned Model
Additional Notes: The model occasionally adds random extra tokens to responses. Specific prompt formatting and low temperature settings are recommended for closed-context instructions.
Training Details 
Data Sources:
jondurbin/airoboros-3.2, bluemoon-fandom-1-1-rp-cleaned, boolq, jondurbin/gutenberg-dpo-v0.1, LDJnr/Capybara, jondurbin/cinematika-v0.1, glaiveai/glaive-function-calling-v2, grimulkan/LimaRP-augmented, piqa, Vezora/Tested-22k-Python-Alpaca, mattpscott/airoboros-summarization, unalignment/toxic-dpo-v0.2
Methodology:
Fine-tuned on the datasets above, including synthetic data generated by airoboros.
Model Architecture:
Qwen2 (Qwen2ForCausalLM); prompts use ChatML formatting.
Input / Output
Input Format:
ChatML prompt formatting (see the sketch below).
Accepted Modalities:
text
Performance Tips:
Use the closed-context prompt markers so the model stays faithful to the provided context and to reduce hallucinations.
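To make the ChatML input format and the closed-context prompt markers above concrete, here is a minimal sketch in Python. It assumes the Hugging Face transformers library and that the tokenizer ships a ChatML chat template; the BEGININPUT/BEGINCONTEXT/ENDCONTEXT/ENDINPUT/BEGININSTRUCTION/ENDINSTRUCTION markers follow the conventions described in the airoboros model cards, so verify them against the repository README before relying on this.

```python
# Minimal sketch: ChatML prompt with airoboros-style closed-context markers.
# Marker names follow the airoboros model-card conventions (assumption:
# verify against the repository README).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jondurbin/airoboros-110b-3.3")

context = "Blueberries are now green."          # illustrative source text
question = "What color are blueberries?"        # illustrative question

# Closed-context block: ask the model to answer only from the provided input.
user_message = (
    "BEGININPUT\n"
    "BEGINCONTEXT\n"
    "source: example\n"
    "ENDCONTEXT\n"
    f"{context}\n"
    "ENDINPUT\n"
    "BEGININSTRUCTION\n"
    f"{question} Don't make up answers if you don't know.\n"
    "ENDINSTRUCTION"
)

messages = [
    {"role": "system", "content": "You are a helpful, unbiased, uncensored assistant."},
    {"role": "user", "content": user_message},
]

# apply_chat_template renders the ChatML wrapper (<|im_start|> ... <|im_end|>)
# defined by the tokenizer's chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
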
LLM Name: Airoboros 110B 3.3
Repository: https://huggingface.co/jondurbin/airoboros-110b-3.3
Model Size: 110b
Required VRAM: 158.3 GB
Updated: 2024-12-26
Maintainer: jondurbin
Model Type: qwen2
Model Files  3.6 GB: 1-of-62   3.5 GB: 2-of-62   3.8 GB: 3-of-62   3.5 GB: 4-of-62   3.5 GB: 5-of-62   3.8 GB: 6-of-62   3.5 GB: 7-of-62   3.5 GB: 8-of-62   3.8 GB: 9-of-62   3.5 GB: 10-of-62   3.5 GB: 11-of-62   3.8 GB: 12-of-62   3.5 GB: 13-of-62   3.5 GB: 14-of-62   3.8 GB: 15-of-62   3.5 GB: 16-of-62   3.5 GB: 17-of-62   3.8 GB: 18-of-62   3.5 GB: 19-of-62   3.5 GB: 20-of-62   3.8 GB: 21-of-62   3.5 GB: 22-of-62   3.5 GB: 23-of-62   3.8 GB: 24-of-62   3.5 GB: 25-of-62   3.5 GB: 26-of-62   3.8 GB: 27-of-62   3.5 GB: 28-of-62   3.5 GB: 29-of-62   3.8 GB: 30-of-62   3.5 GB: 31-of-62   3.5 GB: 32-of-62   3.8 GB: 33-of-62   3.5 GB: 34-of-62   3.5 GB: 35-of-62   3.8 GB: 36-of-62   3.5 GB: 37-of-62   3.5 GB: 38-of-62   3.8 GB: 39-of-62   3.5 GB: 40-of-62   3.5 GB: 41-of-62   3.8 GB: 42-of-62   3.5 GB: 43-of-62   3.5 GB: 44-of-62
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.40.1
Vocabulary Size: 152064
Torch Data Type: bfloat16
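Given the figures above (Qwen2ForCausalLM, bfloat16 weights of roughly 158 GB across 62 safetensors shards, 32K context), a loading-and-generation sketch with Hugging Face transformers might look like the following. It assumes accelerate is installed and that enough GPU memory is available for device_map="auto" to shard the weights; the example prompt and the low temperature (suggested above for closed-context use) are illustrative choices, not settings taken from the model card.

```python
# Sketch: load Airoboros 110B 3.3 and generate. Assumes ~158 GB of bfloat16
# weights can be sharded across available GPUs via accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-110b-3.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # spread the 62 shards across available devices
)

messages = [{"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A low temperature is in line with the recommendation above for closed-context work.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.3)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
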

Best Alternatives to Airoboros 110B 3.3

Best Alternatives                        Context / RAM      Downloads  Likes
Qwen1.5 110B Chat                        32K / 158.3 GB     6356       123
Qwen1.5 110B                             32K / 221.7 GB     3111       93
Dolphin 2.9.1 Qwen 110B                  32K / 193.4 GB     46         26
Airoboros DPO 110B 3.3                   32K / 158.3 GB     27         0
Aqua Qwen 0.1 110B                       32K / 198.3 GB     17         0
...wen 1.5 110B Layer Mix Bpw 2.2        8K / 40.7 GB       17         1
Qwen1.5 110B Chat 4bit                   32K / 62.2 GB      11         5
Qwen1.5 110B Chat 8bit                   32K / 179.8 GB     14         1
...n1.5 110B Chat 3.35bpw H6 EXL2        32K / 49 GB        12         1
...n1.5 110B Chat 3.25bpw H6 EXL2        32K / 47.7 GB      11         1
Note: a green score (e.g. "73.2") indicates that the alternative model scores better than jondurbin/airoboros-110b-3.3.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227