SeaLLM 7B V2.5 4bit by saksornr


Tags: 4-bit · Base model: seallms/seallm-7b-v... · bitsandbytes · conversational · en · th · gemma · license:apache-2.0 · quantized · region:us · safetensors · transformers · accelerate · bitsan... · trl · unsloth

Rank the SeaLLM 7B V2.5 4bit Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
SeaLLM 7B V2.5 4bit (saksornr/SeaLLM-7B-v2.5-4bit)

Best Alternatives to SeaLLM 7B V2.5 4bit

Best Alternatives | Context | VRAM | Downloads | Likes
...t Cleaner Gemma 32k Merged 16b | 31K | 17.1 GB | 9 | 0
...ruct Malayalam Model Vllm 4bit | 16K | 5.6 GB | 29 | 0
... Codegemma 7B HQQ 1bit Smashed | 8K | 2.7 GB | 5 | 0
Codegemma 1.1 7B It 4bit | 8K | 4.8 GB | 7 | 1
Gemma 7B 3.0bpw H6 EXL2 | 8K | 5.1 GB | 7 | 0
Gemma 7B It 3.0bpw H6 EXL2 | 8K | 5.1 GB | 6 | 0
Gemma 7B Bnb 4bit | 8K | 5.6 GB | 101331 | 5
Gemma 7B It Bnb 4bit | 8K | 5.6 GB | 33331 | 3
Gemma 1.1 7B It Bnb 4bit | 8K | 5.6 GB | 408 | 2
Gemma Az | 8K | 5.6 GB | 2 | 2

SeaLLM 7B V2.5 4bit Parameters and Internals

LLM Name | SeaLLM 7B V2.5 4bit
Repository | Open on 🤗
Base Model(s) | SeaLLM 7B V2.5 (SeaLLMs/SeaLLM-7B-v2.5)
Model Size | 7b
Required VRAM | 5.6 GB
Model Type | gemma
Model Files | 5.6 GB
Supported Languages | en, th
Quantization Type | 4bit
Model Architecture | GemmaForCausalLM
Context Length | 8192
Model Max Length | 8192
Transformers Version | 4.40.1
Tokenizer Class | GemmaTokenizer
Padding Token | <pad>
Vocabulary Size | 256000
Initializer Range | 0.02
Torch Data Type | bfloat16
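Based on the metadata above (bitsandbytes 4-bit quantization, GemmaForCausalLM architecture, bfloat16 compute dtype, 8192-token context), a minimal loading sketch could look like the following. This is an illustrative assumption, not part of the card: the helper name `load_model` and the `device_map="auto"` flag are choices made here, and the standard transformers + bitsandbytes API is assumed.

```python
# Hedged sketch: loading saksornr/SeaLLM-7B-v2.5-4bit with transformers
# and bitsandbytes, using only settings stated on the card.

MODEL_ID = "saksornr/SeaLLM-7B-v2.5-4bit"

# Settings taken from the card's parameter table.
QUANT_SETTINGS = {
    "load_in_4bit": True,                   # Quantization Type: 4bit (bitsandbytes)
    "bnb_4bit_compute_dtype": "bfloat16",   # Torch Data Type: bfloat16
}
CONTEXT_LENGTH = 8192                       # Context Length / Model Max Length: 8192


def load_model():
    """Load the quantized checkpoint (requires a GPU, torch, transformers,
    and bitsandbytes installed; downloads ~5.6 GB of weights)."""
    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig)

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=QUANT_SETTINGS["load_in_4bit"],
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        quantization_config=bnb_config,
        device_map="auto",  # place layers on available GPU(s) automatically
    )
    return tokenizer, model
```

A typical call would be `tokenizer, model = load_model()` followed by `model.generate` on tokenized input; since the card lists en and th as supported languages, both English and Thai prompts should be reasonable inputs.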


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801