Bagel Jamba V05 by jondurbin



Bagel Jamba V05 Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Bagel Jamba V05 (jondurbin/bagel-jamba-v05)

Bagel Jamba V05 Parameters and Internals

Model Type 
text generation, instruction-following
Use Cases 
Areas:
Instruction-following, Text generation
Applications:
Research, General AI assistance
Primary Use Cases:
Instruction completion, Creative writing
Limitations:
The model may produce humorous or creative content that is unsuitable for some professional applications.
Considerations:
Suitable for experimental and creative use cases.
Additional Notes 
The model was fine-tuned on a wide variety of datasets to improve instruction following and creativity.
Training Details 
Data Sources:
ai2_arc, allenai/ultrafeedback_binarized_cleaned, argilla/distilabel-intel-orca-dpo-pairs, jondurbin/airoboros-3.2, codeparrot/apps, facebook/belebele, bluemoon-fandom-1-1-rp-cleaned, boolq, camel-ai/biology, camel-ai/chemistry, camel-ai/math, camel-ai/physics, jondurbin/contextual-dpo-v0.1, jondurbin/gutenberg-dpo-v0.1, jondurbin/py-dpo-v0.1, jondurbin/truthy-dpo-v0.1, LDJnr/Capybara, jondurbin/cinematika-v0.1, WizardLM/WizardLM_evol_instruct_70k, glaiveai/glaive-function-calling-v2, grimulkan/LimaRP-augmented, lmsys/lmsys-chat-1m, ParisNeo/lollms_aware_dataset, TIGER-Lab/MathInstruct, Muennighoff/natural-instructions, openbookqa, kingbri/PIPPA-shareGPT, piqa, Vezora/Tested-22k-Python-Alpaca, ropes, cakiki/rosetta-code, Open-Orca/SlimOrca, b-mc2/sql-create-context, squad_v2, mattpscott/airoboros-summarization, migtissera/Synthia-v1.3, unalignment/toxic-dpo-v0.2, WhiteRabbitNeo/WRN-Chapter-1, WhiteRabbitNeo/WRN-Chapter-2, winogrande
Methodology:
Experimental fine-tuning with adjustments to the SFT phase; the training data was decontaminated against benchmarks via cosine similarity.
Model Architecture:
A fine-tuned variant of Jamba-v0.1 trained on the "bagel" dataset.
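The decontamination step mentioned above is not specified in detail on this page. A minimal sketch of the idea, using bag-of-words cosine similarity between training and evaluation samples (the whitespace tokenization and the 0.9 threshold are illustrative assumptions, not the model author's published pipeline):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def decontaminate(train_samples, eval_samples, threshold=0.9):
    """Keep only training samples below the similarity threshold
    against every evaluation sample."""
    eval_vecs = [Counter(s.lower().split()) for s in eval_samples]
    kept = []
    for sample in train_samples:
        vec = Counter(sample.lower().split())
        if all(cosine_similarity(vec, ev) < threshold for ev in eval_vecs):
            kept.append(sample)
    return kept
```

A production pipeline would typically use embedding vectors rather than raw word counts, but the filtering logic is the same: any training example too close to a benchmark item is dropped.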
LLM Name: Bagel Jamba V05
Repository: https://huggingface.co/jondurbin/bagel-jamba-v05
Base Model(s): ai21labs/Jamba-v0.1
Model Size: 51.6B
Required VRAM: 103.6 GB
Updated: 2024-12-21
Maintainer: jondurbin
Model Type: jamba
Instruction-Based: Yes
Model Files: 3.9 GB (1-of-27), 4.0 GB (2-of-27), 4.0 GB (3-of-27), 3.9 GB (4-of-27), 3.9 GB (5-of-27), 3.9 GB (6-of-27), 4.0 GB (7-of-27), 4.0 GB (8-of-27), 4.0 GB (9-of-27), 4.0 GB (10-of-27), 3.9 GB (11-of-27), 4.0 GB (12-of-27), 4.0 GB (13-of-27), 4.0 GB (14-of-27), 4.0 GB (15-of-27), 4.0 GB (16-of-27), 3.9 GB (17-of-27), 4.0 GB (18-of-27), 4.0 GB (19-of-27), 4.0 GB (20-of-27), 4.0 GB (21-of-27), 4.0 GB (22-of-27), 3.9 GB (23-of-27), 4.0 GB (24-of-27), 4.0 GB (25-of-27), 3.8 GB (26-of-27), 0.5 GB (27-of-27)
Model Architecture: JambaForCausalLM
License: apache-2.0
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <|pad|>
Vocabulary Size: 65536
Torch Data Type: bfloat16
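The "Required VRAM" figure is simply the sum of the 27 shard sizes listed above, which is also consistent with bfloat16 weights (roughly 2 bytes per parameter for 51.6B parameters). A quick sanity check:

```python
# Shard sizes in GB, in the order listed on this page (27 safetensors shards).
shard_sizes_gb = [3.9, 4.0, 4.0, 3.9, 3.9, 3.9, 4.0, 4.0, 4.0,
                  4.0, 3.9, 4.0, 4.0, 4.0, 4.0, 4.0, 3.9, 4.0,
                  4.0, 4.0, 4.0, 4.0, 3.9, 4.0, 4.0, 3.8, 0.5]

total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 103.6, matching the Required VRAM entry
```

Note that this covers weights only; actual serving requires additional memory for the KV cache and activations.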


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217