Bagel 34B V0.2 by jondurbin



Bagel 34B V0.2 (jondurbin/bagel-34b-v0.2)

Bagel 34B V0.2 Parameters and Internals

Model Type: Text Generation
Training Details:
Data Sources: ai2_arc, unalignment/spicy-3.1, codeparrot/apps, facebook/belebele, boolq, jondurbin/cinematika-v0.1, drop, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, cais/mmlu, Muennighoff/natural-instructions, openbookqa, piqa, Vezora/Tested-22k-Python-Alpaca, cakiki/rosetta-code, Open-Orca/SlimOrca, spider, squad_v2, migtissera/Synthia-v1.3, datasets/winogrande, nvidia/HelpSteer, Intel/orca_dpo_pairs, unalignment/toxic-dpo-v0.1, jondurbin/truthy-dpo-v0.1, allenai/ultrafeedback_binarized_cleaned, Squish42/bluemoon-fandom-1-1-rp-cleaned, LDJnr/Capybara, JULIELab/EmoBank, kingbri/PIPPA-shareGPT
Methodology: SFT phase using a variety of prompt formats: vicuna, llama-2, alpaca, and chat-ml (sorta).
Model Architecture: unknown
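The SFT phase mixes the four prompt formats named above. As a rough illustration, the templates below are generic approximations of each format family, not the exact templates shipped with the bagel repository:

```python
# Sketch: the same instruction rendered in the four prompt styles the card
# names. Templates are generic approximations of each format family and are
# NOT taken from the bagel repo itself.

def vicuna(system: str, user: str) -> str:
    return f"{system} USER: {user} ASSISTANT: "

def llama2(system: str, user: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST] "

def alpaca(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml("You are a helpful assistant.", "Name a chewy bread product.")
```

Mixing formats during SFT is intended to make the model less brittle about which template it is served at inference time.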
Input / Output:
Input Format: vicuna, llama-2, alpaca, chat-ml
Accepted Modalities: text
Output Format: text
Performance Tips: Recommended to train for 0.75 to 1 epoch, using a relatively low learning rate.
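The epoch and learning-rate tip can be captured as a fine-tuning configuration. A minimal sketch: the 0.75-epoch count comes from the card, but the specific learning rate, batch sizes, and scheduler are illustrative assumptions only ("relatively low" is all the card says):

```python
# Hypothetical fine-tuning hyperparameters reflecting the performance tips.
# 0.75 epochs is from the card; the LR value, scheduler, and batch sizes
# are assumed for illustration.
finetune_config = {
    "num_train_epochs": 0.75,         # card recommends 0.75-1 epoch
    "learning_rate": 1e-5,            # "relatively low" -- assumed value
    "lr_scheduler_type": "cosine",    # assumption
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 16,
    "bf16": True,                     # matches the model's bfloat16 torch dtype
}
assert 0 < finetune_config["num_train_epochs"] <= 1
```

These keys mirror the names used by common Hugging Face training setups, so the dict could be splatted into such a trainer's arguments if desired.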
LLM Name: Bagel 34B V0.2
Repository: https://huggingface.co/jondurbin/bagel-34b-v0.2
Model Size: 34b
Required VRAM: 68.7 GB
Updated: 2025-03-12
Maintainer: jondurbin
Model Type: llama
Model Files: 18 safetensors shards — 1-of-18 (4.0 GB), 2-of-18 through 17-of-18 (3.9 GB each), 18-of-18 (2.3 GB)
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 200000
Model Max Length: 200000
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 64000
Torch Data Type: bfloat16
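The Required VRAM figure is consistent with the shard sizes listed under Model Files; a quick sanity check:

```python
# Sum the 18 safetensors shard sizes: one 4.0 GB shard, sixteen 3.9 GB
# shards, and one final 2.3 GB shard.
shard_sizes_gb = [4.0] + [3.9] * 16 + [2.3]
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 68.7, matching the Required VRAM entry
```

This also matches expectations for a 34B-parameter model stored in bfloat16 (roughly 2 bytes per parameter).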

Best Alternatives to Bagel 34B V0.2

Best Alternatives          Context / RAM     Downloads  Likes
Casual Magnum 34B          195K / 68.8 GB    15         1
Yi 34B 200K                195K / 68.9 GB    6867       318
34B Beta                   195K / 69.2 GB    1920       63
Bagel Hermes 34B Slerp     195K / 68.9 GB    2170       1
Smaug 34B V0.1             195K / 69.2 GB    1910       60
Yi 34B 200K AEZAKMI V2     195K / 69.2 GB    2071       12
Mergekit Slerp Anaazls     195K / 69.2 GB    7          0
Faro Yi 34B                195K / 69.2 GB    1956       6
Dolphin 2.2 Yi 34B 200K    195K / 69.2 GB    2059       36
Deepmoney 34B 200K Base    195K / 68.9 GB    1849       69



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227