Bagel 7B V0.1 by jondurbin


Tags: Autotrain compatible · Conversational · Endpoints compatible · Mistral · Region: US · Safetensors · Sharded · TensorFlow
Datasets: ai2_arc · boolq · cais/mmlu · cakiki/rosetta-code · codeparrot/apps · datasets/winogrande · drop · facebook/belebele · jondurbin/cinematika-v0.1 · lmsys/lmsys-chat-1m · migtissera/Synthia-v1.3 · Muennighoff/natural-instructions · Open-Orca/SlimOrca · openbookqa · piqa · spider · squad_v2 · TIGER-Lab/MathInstruct · unalignment/spicy-3.1 · Vezora/Tested-22k-Python-Alpaca
Model Card on HF 🤗: https://huggingface.co/jondurbin/bagel-7b-v0.1

Bagel 7B V0.1 Benchmarks

Bagel 7B V0.1 (jondurbin/bagel-7b-v0.1)

Bagel 7B V0.1 Parameters and Internals

Use Cases 
Areas:
Roleplaying scenarios, AI instruction-following
Applications:
Conversational agents, Reading comprehension, Code generation
Primary Use Cases:
Scenarios requiring less 'truthful' AI responses
Additional Notes 
This version of the model is trained without DPO, making it potentially less 'truthful,' which might be beneficial in particular scenarios.
Training Details 
Data Sources:
ai2_arc, unalignment/spicy-3.1, codeparrot/apps, facebook/belebele, boolq, jondurbin/cinematika-v0.1, drop, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, cais/mmlu, Muennighoff/natural-instructions, openbookqa, piqa, Vezora/Tested-22k-Python-Alpaca, cakiki/rosetta-code, Open-Orca/SlimOrca, spider, squad_v2, migtissera/Synthia-v1.3, datasets/winogrande
Methodology:
Supervised Fine-Tuning (SFT); the DPO phase was not applied in this v0.1 release (see Additional Notes above)
Context Length:
4096 (training sequence length; the architecture's maximum is 32768, see below)
Training Setup:
DeepSpeed with ZeRO stage 2 optimization (see the sketch below)
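The training configuration file itself is not reproduced here. As a minimal sketch of what a ZeRO stage 2 setup looks like (the batch-size placeholders and bucket size below are illustrative assumptions, not the values used for this model):

```python
import json

# Minimal DeepSpeed ZeRO stage 2 config, written as a Python dict so it can be
# dumped to ds_config.json or passed to deepspeed.initialize().
# All concrete values here are illustrative assumptions.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",  # resolved by the HF Trainer integration
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": True},                 # matches the model's bfloat16 weights
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": 2,                            # shard optimizer state and gradients
        "overlap_comm": True,                  # overlap gradient reduction with backward
        "contiguous_gradients": True,
        "reduce_bucket_size": 5e8,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```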
Input Output 
Input Format:
Multiple prompt formats including Vicuna, Alpaca (sort of), ChatML (sort of), and Llama-2 chat (see the sketch below).
Accepted Modalities:
text
Performance Tips:
For further fine-tuning, a single epoch is recommended: because the data is duplicated across multiple prompt formats, each pass over it amounts to several effective epochs.
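As a rough illustration of two of the prompt formats named above, the sketch below builds Vicuna-style and Llama-2-chat-style prompts. The exact system prompts and whitespace are assumptions; check the upstream model card before depending on them.

```python
def vicuna_prompt(system: str, user: str) -> str:
    # Vicuna style: plain-text turns labelled USER / ASSISTANT.
    return f"{system} USER: {user} ASSISTANT: "

def llama2_chat_prompt(system: str, user: str) -> str:
    # Llama-2 chat style: [INST] blocks with an embedded <<SYS>> section.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(vicuna_prompt("A chat.", "Write a haiku about bagels."))
print(llama2_chat_prompt("You are a helpful assistant.", "Write a haiku about bagels."))
```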
LLM Name: Bagel 7B V0.1
Repository 🤗: https://huggingface.co/jondurbin/bagel-7b-v0.1
Model Size: 7b
Required VRAM: 14.4 GB
Updated: 2024-12-22
Maintainer: jondurbin
Model Type: mistral
Model Files: 4.9 GB (1-of-3), 5.0 GB (2-of-3), 4.5 GB (3-of-3)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
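Given the specs above (MistralForCausalLM, bfloat16, ~14.4 GB of weights across three shards), a minimal loading sketch with the Transformers library might look like this. It assumes a single GPU with enough free VRAM and the accelerate package installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jondurbin/bagel-7b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to LlamaTokenizer per the specs
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type" above
    device_map="auto",           # requires the accelerate package
)

prompt = "USER: What is a bagel? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```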

Quantized Models of the Bagel 7B V0.1

Model                Likes   Downloads   VRAM
Bagel 7B V0.1 GGUF   2       153         3 GB
Bagel 7B V0.1 GPTQ   0       18          4 GB
Bagel 7B V0.1 AWQ    0       17          4 GB
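The GGUF build runs without the full-precision weights, e.g. on CPU via llama-cpp-python. A minimal sketch, assuming a locally downloaded quant file (the filename below is hypothetical):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="bagel-7b-v0.1.Q4_K_M.gguf",  # hypothetical filename; use whichever quant you downloaded
    n_ctx=4096,  # the training sequence length; the architecture supports up to 32768
)

out = llm("USER: What makes the bagel dataset blend unusual? ASSISTANT:", max_tokens=64)
print(out["choices"][0]["text"])
```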

Best Alternatives to Bagel 7B V0.1

Best Alternatives                    Context / RAM      Downloads   Likes
...Nemo Instruct 2407 Abliterated    1000K / 24.5 GB    2668        9
MegaBeam Mistral 7B 512K             512K / 14.4 GB     7113        46
SpydazWeb AI HumanAI RP              512K / 14.4 GB     26          1
SpydazWeb AI HumanAI 002             512K / 14.4 GB     19          1
...daz Web AI ChatML 512K Project    512K / 14.5 GB     12          0
MegaBeam Mistral 7B 300K             282K / 14.4 GB     3282        15
Hebrew Mistral 7B 200K               256K / 30 GB       3193        15
Astral 256K 7B V2                    250K / 14.4 GB     19          0
Astral 256K 7B                       250K / 14.4 GB     16          0
Boptruth Agatha 7B                   128K / 14.4 GB     475         0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217