Model by taradepan


Tags: 4bit · AutoTrain compatible · Base model: unsloth/gemma-7b-bnb-4bit · en · Endpoints compatible · Gemma · License: apache-2.0 · PyTorch · Quantized · Region: us · SFT · Sharded · TRL · Unsloth
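The sft, trl, unsloth, and 4bit tags suggest the checkpoint was produced by supervised fine-tuning on top of unsloth/gemma-7b-bnb-4bit using Unsloth and TRL. The sketch below shows only what that kind of workflow generally looks like; the dataset, LoRA settings, and training hyperparameters are placeholders rather than details taken from this page, and the exact SFTTrainer arguments vary between TRL versions.

```python
# Hypothetical SFT workflow sketch with Unsloth + TRL.
# Dataset and hyperparameters are placeholders, not the settings used for this model.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the 4-bit base model that this checkpoint lists as its base.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b-bnb-4bit",
    max_seq_length=8192,        # matches the context length reported below
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules here are illustrative defaults).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a preformatted "text" column.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column holding the formatted training text
    max_seq_length=8192,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()
```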

Rank the Model Capabilities

Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Model (taradepan/model)

Best Alternatives to Model

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
SeaLLM 7B V2.5 4bit | 60.6 | 8K / 5.6 GB | 10 | 0
Train06 | 55.9 | 8K / 9.1 GB | 2422 | 0
Train07 | 55.9 | 8K / 17.1 GB | 753 | 0
CNCF | 55.9 | 8K / 5.3 GB | 787 | 0
...t Cleaner Gemma 32k Merged 16b | - | 31K / 17.1 GB | 10 | 0
Codegemma 1.1 7B It 4bit | - | 8K / 4.8 GB | 4 | 1
Gemma 7B Bnb 4bit | - | 8K / 5.6 GB | 176221 | 4
Gemma 7B It Bnb 4bit | - | 8K / 5.6 GB | 71931 | 3
Gemma 1.1 7B It Bnb 4bit | - | 8K / 5.6 GB | 1171 | 2
Gemma Az | - | 8K / 5.6 GB | 2 | 2
Note: a green score (e.g. "73.2") means that the listed model is better than taradepan/model.

Model Parameters and Internals

LLM Name: Model
Repository: taradepan/model (Hugging Face)
Base Model(s): Gemma 7B Bnb 4bit (unsloth/gemma-7b-bnb-4bit)
Model Size: 7B
Required VRAM: 17.1 GB
Model Type: gemma
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 5.0 GB (3-of-4), 2.1 GB (4-of-4)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: GemmaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.2
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Initializer Range: 0.02
Torch Data Type: float16
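Given the architecture and tokenizer listed above (GemmaForCausalLM, GemmaTokenizer, float16 weights, 8192-token context), the checkpoint should load through the standard transformers auto classes. Below is a minimal, unofficial loading sketch: it assumes the taradepan/model repo id shown above, a transformers install compatible with version 4.41.2, and enough memory for the roughly 17 GB of shards; the prompt is only an illustration.

```python
# Minimal loading/inference sketch (not an official example from the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "taradepan/model"  # repository listed above

tokenizer = AutoTokenizer.from_pretrained(repo_id)           # GemmaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # matches the reported torch data type
    device_map="auto",           # spread the ~17 GB of shards across available devices
)

prompt = "Explain what 4-bit quantization does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```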


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801