Jamba Bagel 4bit by KnutJaegersberg



Jamba Bagel 4bit Benchmarks

nn.n% — how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Jamba Bagel 4bit (KnutJaegersberg/jamba-bagel-4bit)

Jamba Bagel 4bit Parameters and Internals

Model Type 
fine-tune
Use Cases 
Areas:
research, commercial applications
Applications:
instruction following, question answering, text generation
Primary Use Cases:
Assisting in diverse tasks with varied prompt formats; handling context-specific question answering
Limitations:
Potential bias inherited from the varied training datasets; lacks deep domain-specific knowledge.
Considerations:
Choose appropriate prompt template for specific instructions.
Additional Notes 
Designed to support various prompting strategies including context obedience, multi-format prompts, SQL queries, emotion detection, and function calling.
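For the function-calling case, the card does not document the exact schema this model expects; the sketch below (a hypothetical `get_weather` function and prompt wording, both assumptions) only illustrates the general shape of such a prompt:

```python
import json

# Hypothetical function-calling prompt; the actual schema expected by
# jamba-bagel-4bit is not documented on this page.
weather_fn = {
    "name": "get_weather",  # hypothetical function name
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Embed the function definition in the prompt and ask the model to
# answer with a JSON object naming the function and its arguments.
prompt = (
    "As an AI assistant, select the best function from the list below "
    "and respond with a JSON object containing the function name and arguments.\n\n"
    f"Available functions:\n{json.dumps(weather_fn, indent=2)}\n\n"
    "Input: What's the weather like in Berlin?"
)
print(prompt)
```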
Training Details 
Data Sources:
ai2_arc, airoboros, apps, belebele, bluemoon, boolq, camel-ai biology, camel-ai chemistry, camel-ai math, camel-ai physics, capybara, cinematika, emobank, evol-instruct, glaive-function-calling-v2, gutenberg, limarp-augmented, lmsys_chat_1m, lollms, mathinstruct, natural_instructions, openbookqa, pippa, piqa, python_alpaca, ropes, rosetta_code, slimorca, sql-create-context, squad_v2, airoboros-summarization, synthia, whiterabbitneo chapter 1, whiterabbitneo chapter 2, winogrande, airoboros 3.2, contextual-dpo, helpsteer, distilabel_orca_dpo_pairs, gutenberg-dpo, py-dpo, toxic-dpo, truthy, ultrafeedback
Methodology:
Fine-tuned with dedicated SFT and DPO datasets, drawing on diverse sources for instruction tuning and context-obedient question answering.
Hardware Used:
A6000 GPUs
Input Output 
Input Format:
Multiple prompt formats: Vicuna, Llama-2, Alpaca, ChatML.
Accepted Modalities:
text
Output Format:
Text response based on prompt template.
Performance Tips:
For best results in context-obedient question answering, use low temperature settings.
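The supported prompt formats differ mainly in the wrapper text placed around the instruction. As a rough sketch (the system-prompt wording here is an assumption, not taken from this card), an instruction could be wrapped like this:

```python
# Minimal sketch of three common prompt templates; the exact wording
# used during this model's training is an assumption.

def format_prompt(instruction: str, style: str = "alpaca") -> str:
    """Wrap an instruction in a conventional prompt template."""
    if style == "vicuna":
        return f"USER: {instruction}\nASSISTANT:"
    if style == "alpaca":
        return (
            "Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Response:\n"
        )
    if style == "chatml":
        return (
            f"<|im_start|>user\n{instruction}<|im_end|>\n"
            "<|im_start|>assistant\n"
        )
    raise ValueError(f"unknown prompt style: {style}")

print(format_prompt("What is Jamba?", "vicuna"))
```

Whichever template is used, the performance tip above suggests pairing it with a low generation temperature for context-obedient question answering.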
LLM Name: Jamba Bagel 4bit
Repository 🤗: https://huggingface.co/KnutJaegersberg/jamba-bagel-4bit
Base Model(s): ai21labs/Jamba-v0.1
Model Size: 26.9b
Required VRAM: 29.9 GB
Updated: 2024-12-22
Maintainer: KnutJaegersberg
Model Type: jamba
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-6), 5.0 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 5.0 GB (5-of-6), 4.9 GB (6-of-6)
Quantization Type: 4bit
Model Architecture: JambaForCausalLM
License: apache-2.0
Transformers Version: 4.37.0
Tokenizer Class: LlamaTokenizer
Padding Token: <|pad|>
Vocabulary Size: 65536
Torch Data Type: bfloat16
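As a quick sanity check, the Required VRAM entry matches the combined size of the six sharded safetensors files listed above:

```python
# Sum the six sharded safetensors files listed in the card (sizes in GB).
shards = [5.0, 5.0, 5.0, 5.0, 5.0, 4.9]
total_gb = round(sum(shards), 1)
print(total_gb)  # 29.9, matching the Required VRAM entry
```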



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217