| LLM Name | Google Gemma 7B 1717986247 |
|---|---|
| Repository | Open on 🤗 |
| Merged Model | Yes |
| Model Size | 7B |
| Required VRAM | 17.1 GB |
| Updated | 2024-07-26 |
| Maintainer | 0xfaskety |
| Model Type | gemma |
| Model Files | |
| Model Architecture | GemmaForCausalLM |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.41.2 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | `<pad>` |
| Vocabulary Size | 256000 |
| Torch Data Type | float16 |
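Given the metadata above (GemmaForCausalLM architecture, GemmaTokenizer, float16 weights, 8192-token context), the model can be loaded with the Hugging Face `transformers` library. The sketch below is a minimal example, not an official usage snippet: the repository id is an assumption inferred from the maintainer and model name, so substitute the exact id from the repository link.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id inferred from the listing above;
# replace with the actual id from the "Open on 🤗" link.
repo_id = "0xfaskety/Google-Gemma-7B-1717986247"

# Load the tokenizer (GemmaTokenizer) and the model in float16,
# matching the listed torch dtype. Expect ~17.1 GB of VRAM;
# device_map="auto" requires the `accelerate` package.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain gradient descent in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```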
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Kaggle Math Model Gemma V1 | 0.2 | 12K / 17.1 GB | 7 | 0 |
| Gemma 1.1 7B It | 0.5 | 8K / 17.1 GB | 66477 | 254 |
| Zephyr 7B Gemma V0.1 | 0.3 | 8K / 17.1 GB | 548 | 121 |
| Codegemma 7B It | 0.3 | 8K / 17.1 GB | 3775 | 180 |
| DiscoPOP Zephyr 7B Gemma | 0.3 | 8K / 17.1 GB | 3517 | 27 |
| Codegemma 7B | 0.3 | 8K / 17.1 GB | 2766 | 138 |
| SauerkrautLM Gemma 7B | 0.3 | 8K / 17.1 GB | 5632 | 13 |
| SeaLLM 7B V2.5 | 0.3 | 8K / 17.1 GB | 17751 | 47 |
| Gemma 7B Custom Tokenizer Base | 0.3 | 8K / 34 GB | 214 | 0 |
| Ge1H10M 0000 | 0.3 | 8K / 17.1 GB | 1816 | 0 |