Bagel 8B V1.0 8.0bpw H8 EXL2 by LoneStriker


Bagel 8B V1.0 8.0bpw H8 EXL2 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Bagel 8B V1.0 8.0bpw H8 EXL2 (LoneStriker/bagel-8b-v1.0-8.0bpw-h8-exl2)

Bagel 8B V1.0 8.0bpw H8 EXL2 Parameters and Internals

Model Type: text generation
Additional Notes: The model focuses heavily on refining instruction-response dynamics by emphasizing context-based learning and reducing hallucinations.
Training Details
Data Sources: ai2_arc, allenai/ultrafeedback_binarized_cleaned, argilla/distilabel-intel-orca-dpo-pairs, jondurbin/airoboros-3.2, codeparrot/apps, facebook/belebele, bluemoon-fandom-1-1-rp-cleaned, boolq, camel-ai/biology, camel-ai/chemistry, camel-ai/math, camel-ai/physics, jondurbin/contextual-dpo-v0.1, jondurbin/gutenberg-dpo-v0.1, jondurbin/py-dpo-v0.1, jondurbin/truthy-dpo-v0.1, LDJnr/Capybara, jondurbin/cinematika-v0.1, WizardLM/WizardLM_evol_instruct_70k, glaiveai/glaive-function-calling-v2, grimulkan/LimaRP-augmented, lmsys/lmsys-chat-1m, ParisNeo/lollms_aware_dataset, TIGER-Lab/MathInstruct, Muennighoff/natural-instructions, openbookqa, kingbri/PIPPA-shareGPT, piqa, Vezora/Tested-22k-Python-Alpaca, ropes, cakiki/rosetta-code, Open-Orca/SlimOrca, b-mc2/sql-create-context, squad_v2, mattpscott/airoboros-summarization, migtissera/Synthia-v1.3, unalignment/toxic-dpo-v0.2, WhiteRabbitNeo/WRN-Chapter-1, WhiteRabbitNeo/WRN-Chapter-2, winogrande
Methodology: Decontamination by cosine similarity as a sanity check against common benchmarks.
Model Architecture: Llama-3 architecture with specific fine-tuning parameters
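The cosine-similarity decontamination mentioned above can be sketched roughly as follows. This is an illustrative sketch only: the embedding vectors, the threshold value, and the helper names are assumptions, not the authors' actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_contaminated(train_vecs, bench_vecs, threshold=0.95):
    """Return indices of training vectors too similar to any benchmark vector."""
    flagged = []
    for i, t in enumerate(train_vecs):
        if any(cosine_similarity(t, b) >= threshold for b in bench_vecs):
            flagged.append(i)
    return flagged

# Toy vectors standing in for text embeddings (hypothetical values):
train = [[1.0, 0.0, 0.0], [0.7, 0.7, 0.1]]
bench = [[0.99, 0.01, 0.0]]
print(flag_contaminated(train, bench))  # → [0]: the first vector is near-identical to a benchmark item
```

In practice the vectors would come from an embedding model and the flagged examples would be dropped (or, as here, merely inspected as a sanity check) before training.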
Input Output
Input Format: llama-3-instruct prompt template
Accepted Modalities: text
Output Format: textual response
Performance Tips: Use a very low temperature (or greedy decoding) when deterministic responses are needed.
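The llama-3-instruct prompt template noted above can be built by hand as in the sketch below. The special tokens follow the standard Llama 3 chat format; the helper function name and example messages are illustrative.

```python
def format_llama3_prompt(system, user):
    """Build a llama-3-instruct prompt string from a system and a user message."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("You are a helpful assistant.", "What is EXL2?")
print(prompt)
```

The prompt deliberately ends after the assistant header so that generation continues as the assistant's reply. Most loaders can also produce this string automatically from the model's bundled chat template.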
LLM Name: Bagel 8B V1.0 8.0bpw H8 EXL2
Repository 🤗: https://huggingface.co/LoneStriker/bagel-8b-v1.0-8.0bpw-h8-exl2
Base Model(s): Meta Llama 3 8B (meta-llama/Meta-Llama-3-8B)
Model Size: 8b
Required VRAM: 8.5 GB
Updated: 2025-02-22
Maintainer: LoneStriker
Model Type: llama
Instruction-Based: Yes
Model Files: 8.5 GB
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
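The 8.5 GB file size follows roughly from the quantization bitrate: at 8.0 bits per weight, each parameter takes one byte. A back-of-envelope check (the exact parameter count of Llama-3-8B, assumed here as ~8.03 B, and the extra space for metadata and the quantization's calibration overhead are approximations):

```python
params = 8.03e9   # approximate parameter count of Llama-3-8B (assumption)
bpw = 8.0         # bits per weight for this 8.0bpw EXL2 quant

weight_bytes = params * bpw / 8  # bits -> bytes
print(f"{weight_bytes / 1e9:.1f} GB")  # ~8.0 GB of quantized weights
```

The remaining ~0.5 GB of the 8.5 GB figure is overhead; note that actual VRAM use at runtime is higher still, since the KV cache for an 8192-token context is allocated on top of the weights.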

Best Alternatives to Bagel 8B V1.0 8.0bpw H8 EXL2

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| ...B Instruct Gradient 1048K 4bit | 1024K / 4.5 GB | 212 |
| ...B Instruct Gradient 1048K 8bit | 1024K / 8.6 GB | 71 |
| ...truct Gradient 1048K Bpw6 EXL2 | 1024K / 6.7 GB | 102 |
| ...truct Gradient 1048K Bpw5 EXL2 | 1024K / 5.8 GB | 70 |
| Llama 3 8B Instruct 1048K 4bit | 1024K / 4.5 GB | 1225 |
| Llama 3 8B Instruct 1048K 8bit | 1024K / 8.6 GB | 2817 |
| ... Gradient 1048K 8.0bpw H8 EXL2 | 1024K / 8.6 GB | 83 |
| ...ct Gradient 1048K Bpw2.25 EXL2 | 1024K / 3.4 GB | 51 |
| Llama 3 8B Instruct 262K 2bit | 256K / 2.5 GB | 71 |
| ...B Instruct 262k V2 EXL2 6.0bpw | 256K / 6.7 GB | 111 |


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227