| Field | Value |
|---|---|
| LLM Name | SeaLLM 7B V2.5 AWQ |
| Repository 🤗 | https://huggingface.co/NghiemAbe/SeaLLM-7B-v2.5-AWQ |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 5.6 GB |
| Updated | 2024-12-03 |
| Maintainer | NghiemAbe |
| Model Type | gemma |
| Model Files | |
| Supported Languages | en zh vi id th ms km lo my tl |
| AWQ Quantization | Yes |
| Quantization Type | awq |
| Model Architecture | GemmaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.39.3 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | <pad> |
| Vocabulary Size | 256000 |
| Torch Data Type | bfloat16 |
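Because the checkpoint ships its AWQ quantization config, it can be loaded through the standard `transformers` API. The snippet below is a minimal sketch, not the maintainer's reference code: it assumes `transformers` ≥ 4.39 (as listed above) and `autoawq` are installed, a CUDA GPU with roughly 6 GB of free VRAM is available, the repo provides a Gemma-style chat template, and the prompt text is purely illustrative.

```python
# Minimal loading sketch for NghiemAbe/SeaLLM-7B-v2.5-AWQ (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NghiemAbe/SeaLLM-7B-v2.5-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The AWQ quantization config stored in the repo is picked up automatically;
# AWQ kernels run on CUDA, so the model is placed on the GPU via device_map.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a single-turn chat prompt (assumes the repo includes a chat template).
messages = [{"role": "user", "content": "Xin chào! Bạn có thể giới thiệu về mình không?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```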
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Codegemma 7B AWQ | 8K / 7.2 GB | 9 | 0 |
| SeaLLM 7B V2.5 AWQ | 8K / 7.2 GB | 205 | 2 |
| Gemma 1.1 7B It AWQ | 8K / 7.2 GB | 8 | 0 |
| CodeGemma 7B AWQ | 8K / 7.2 GB | 8 | 0 |
| Gemma Ko 7B AWQ | 8K / 5.6 GB | 10 | 0 |
| Codegemma 1.1 7B It AWQ | 8K / 7.2 GB | 10 | 0 |
| Gemma 7B It AWQ | 8K / 7.2 GB | 50 | 2 |
| Gemma 7B AWQ | 8K / 7.2 GB | 14 | 0 |
| Gemma 7B It AWQ | 8K / 7.2 GB | 5 | 0 |
| ...t Cleaner Gemma 32k Merged 16b | 31K / 17.1 GB | 11 | 0 |