Bloomz 1b7 by bigscience


Arxiv: 2211.01786 · Dataset: bigscience/xp3 · Tags: bloom, pytorch, safetensors, tensorboard, model-index, autotrain compatible, endpoints compatible, region:us (language tags are listed under Supported Languages below)
Model Card on HF 🤗: https://huggingface.co/bigscience/bloomz-1b7

Bloomz 1b7 Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Bloomz 1b7 (bigscience/bloomz-1b7)

Bloomz 1b7 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
text generation
Applications:
research, multilingual tasks
Primary Use Cases:
translation
Additional Notes 
Model is capable of zero-shot crosslingual generalization to unseen tasks.
Supported Languages 
Natural languages (proficiency levels unknown): ak, ar, as, bm, bn, ca, en, es, eu, fon, fr, gu, hi, id, ig, ki, kn, lg, ln, ml, mr, ne, nso, ny, or, pa, pt, rn, rw, sn, st, sw, ta, te, tn, ts, tum, tw, ur, vi, wo, xh, yo, zh, zu
Programming languages: C, C++, C#, Go, Java, JavaScript, Lua, PHP, Python, Ruby, Rust, Scala, TypeScript
Training Details 
Data Sources:
bigscience/xP3, Muennighoff/P3
Data Volume:
8.39 billion tokens
Methodology:
Multitask finetuning on a crosslingual task mixture (xP3); see the data-loading sketch after this section.
Hardware Used:
128 A100 80GB GPUs
Model Architecture:
Same as bloom-1b7
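
To inspect the kind of finetuning data involved, the xP3 mixture can be streamed from the Hub without a full download. A minimal sketch with the datasets library; the "en" configuration name and the "inputs"/"targets" field names are assumptions, so check the dataset page for the actual layout:

    from datasets import load_dataset

    # Stream a few examples instead of downloading the whole corpus.
    # Config and field names are assumed; see
    # https://huggingface.co/datasets/bigscience/xP3 for the actual layout.
    ds = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
    for example in ds.take(3):
        print(example["inputs"], "->", example["targets"])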
Input Output 
Input Format:
text
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Performance may vary with the prompt. Make it clear when the input stops, to avoid the model trying to continue it; for example, the prompt "Translate to English: Je t'aime" without a final full stop (.) may lead the model to continue the French sentence instead of translating it.
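
A minimal inference sketch using the transformers library, following the prompting advice above (the "Translation:" suffix is an illustrative way to mark the end of the input):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "bigscience/bloomz-1b7"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    # End the prompt unambiguously so the model answers instead of
    # continuing the input text.
    prompt = "Translate to English: Je t'aime. Translation:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))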
LLM Name: Bloomz 1b7
Repository 🤗: https://huggingface.co/bigscience/bloomz-1b7
Model Size: 1.7b
Required VRAM: 3.4 GB
Updated: 2024-12-22
Maintainer: bigscience
Model Type: bloom
Model Files: 3.4 GB (PyTorch), 3.4 GB (Safetensors)
Supported Languages: ak ar as bm bn ca code en es eu fr gu hi id ig ki kn lg ln ml mr ne ny or pa pt rn rw sn st sw ta te tn ts tw ur vi wo xh yo zh zu
Model Architecture: BloomForCausalLM
License: bigscience-bloom-rail-1.0
Transformers Version: 4.20.0
Tokenizer Class: BloomTokenizerFast
Padding Token: <pad>
Vocabulary Size: 250880
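
The tokenizer and vocabulary details above can be checked directly against the checkpoint:

    from transformers import AutoConfig, AutoTokenizer

    checkpoint = "bigscience/bloomz-1b7"
    tok = AutoTokenizer.from_pretrained(checkpoint)
    cfg = AutoConfig.from_pretrained(checkpoint)

    print(type(tok).__name__)  # BloomTokenizerFast
    print(tok.pad_token)       # <pad>
    print(cfg.vocab_size)      # 250880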

Best Alternatives to Bloomz 1b7

Best Alternatives      | Context / RAM | Downloads | Likes
Merged DPO Model       | 0K / 6.8 GB   | 13        | 0
Bloom                  | 200K / 3.4 GB | 19        | 0
Mnlp DPO Model Bloom   | 0K / 6.8 GB   | 12        | 0
Bloom 1b7              | 0K / 3.4 GB   | 72227     | 119
Aira 2 Portuguese 1B7  | 0K / 0 GB     | 82        | 2
Bloom 1b7 Intermediate | 0K / 3.4 GB   | 28        | 0
Bloom 1b7 8bit         | 0K / 2.2 GB   | 560       | 6

Note: a green score (e.g. "73.2") means the model is better than bigscience/bloomz-1b7.
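
The 8-bit entry above ships at roughly 2.2 GB instead of 3.4 GB; a similar footprint can be obtained by quantizing the base checkpoint at load time. A sketch assuming the bitsandbytes package and a CUDA GPU are available:

    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # Quantize the weights to 8-bit while loading; roughly halves memory
    # use versus fp16 at a small quality cost.
    quant_config = BitsAndBytesConfig(load_in_8bit=True)
    model = AutoModelForCausalLM.from_pretrained(
        "bigscience/bloomz-1b7",
        quantization_config=quant_config,
        device_map="auto",
    )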

Rank the Bloomz 1b7 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you searching for? 40,066 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217