LLM Explorer: A Curated Large Language Model Directory and Analytics

Bloom 6b4 Clp German by malteos

Which open-source LLMs or SLMs are you looking for? 18,870 are listed in total.


Tags: Arxiv:2301.09626 · Bloom · Dataset:oscar · De · Endpoints compatible · Ggml · Has space · License:bigscience-bloom-rail-... · Pytorch · Quantized · Region:us

Rank the Bloom 6b4 Clp German Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Bloom 6b4 Clp German (malteos/bloom-6b4-clp-german)

Best Alternatives to Bloom 6b4 Clp German

Best Alternatives                  HF Rank  Context/RAM   Downloads  Likes
FusionNet 34Bx2 MoE GGUF           67.6     0K / 22.4 GB  4          6
...AO 7Bx2 MoE Instruct V7.0 GGUF  67.1     0K / 4.8 GB   25         39
Helion 4x34B GGUF                  66.2     0K / 41.5 GB  1          3
Cosmosis 3x34B GGUF                66.1     0K / 31.9 GB  1          5
...Top 5x7B Instruct S5 V0.1 GGUF  65.9     0K / 2.7 GB   4          1
Go Bruins V2.1.1 GGUF              65.7     0K / 3.1 GB   2          7
Quantum DPO V0.1 GGUF              65.7     0K / 3.1 GB   2          1
Quantum V0.01 GGUF                 65.5     0K / 3.1 GB   2          2
Sakura SOLAR Instruct GGUF         65.2     0K / 4.5 GB   5          5
...rautLM UNA SOLAR Instruct GGUF  65.1     0K / 4.5 GB   10         11
Note: a score shown in green (e.g. "73.2") indicates that the alternative outperforms malteos/bloom-6b4-clp-german.

Bloom 6b4 Clp German Parameters and Internals

LLM Name: Bloom 6b4 Clp German
Repository: Open on 🤗 (Hugging Face)
Base Model(s): GermanGPT Dolly Lora 1b5 (aari1995/GermanGPT_dolly_lora_1b5)
Required VRAM: 0.4 GB
Updated: 2024-02-29
Maintainer: malteos
Model Type: bloom
Model Files: 12.9 GB total (shards 1-of-32 through 31-of-32 at 0.4 GB each; shard 32-of-32 at 0.0 GB)
Supported Languages: de
GGML Quantization: Yes
Quantization Type: ggml
Model Architecture: AutoModel
License: bigscience-bloom-rail-1.0
Transformers Version: 4.24.0.dev0
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 50304
Initializer Range: 0.02
Layer Norm Epsilon: 1.0E-5
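
Given the metadata above, the checkpoint can in principle be loaded straight from the Hugging Face Hub with the transformers library. The snippet below is a minimal sketch, assuming the repo id malteos/bloom-6b4-clp-german shown on this page resolves to a standard BLOOM causal-LM checkpoint; the German prompt and the generation settings are illustrative, not taken from the model card.

```python
# Minimal sketch: load the checkpoint with Hugging Face transformers.
# Assumes the repo id listed on this page and a standard causal-LM head;
# verify details against the model card before relying on this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "malteos/bloom-6b4-clp-german"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # fp16 roughly matches the ~12.9 GB of shard files
    device_map="auto",          # requires the `accelerate` package
)

# German prompt, since the supported language is listed as "de".
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading in float16 keeps the memory footprint close to the 12.9 GB of model files listed above; with device_map="auto", accelerate can offload layers to CPU memory on GPUs that cannot hold the full model.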
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003