LLM Name | Gemma 3 1B Pt Peft Dare |
Repository 🤗 | https://huggingface.co/swarup3204/gemma-3-1b-pt-peft-dare
Base Model(s) | |
Merged Model | Yes |
Model Size | 1b |
Required VRAM | 2 GB |
Updated | 2025-04-28 |
Maintainer | swarup3204 |
Model Type | gemma3_text |
Model Files | |
Model Architecture | Gemma3ForCausalLM |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.50.3 |
Tokenizer Class | GemmaTokenizer |
Padding Token | <pad> |
Vocabulary Size | 262144 |
Torch Data Type | bfloat16 |
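
A minimal loading sketch, assuming the repository hosts the merged checkpoint directly loadable as a standard `Gemma3ForCausalLM` (the card lists Transformers 4.50.3; Gemma 3 support requires transformers >= 4.50):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swarup3204/gemma-3-1b-pt-peft-dare"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype listed on the card
)

# Quick smoke test
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The "Dare" in the model name presumably refers to the DARE (Drop And REscale) merging method. A sketch of the core idea on hypothetical weight tensors, not this repository's actual merge recipe: each fine-tuned delta weight is randomly dropped with probability `p`, and the survivors are rescaled by `1/(1 - p)` before being added back to the base weights.

```python
import torch

def dare_merge(base: torch.Tensor, finetuned: torch.Tensor, p: float = 0.9) -> torch.Tensor:
    """Drop-And-REscale merge of one weight tensor (illustrative only)."""
    delta = finetuned - base                                # task vector
    mask = torch.bernoulli(torch.full_like(delta, 1 - p))   # keep with prob 1 - p
    delta = delta * mask / (1 - p)                          # rescale survivors
    return base + delta
```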
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Gemma 3 1B It | 32K / 2 GB | 2000503 | 345 |
Gemma 3 1B Pt | 32K / 2 GB | 172037 | 108 |
Gemma 3 1B It | 32K / 2 GB | 48876 | 8 |
...a 3 1B It Qat Int4 Unquantized | 32K / 2 GB | 555 | 3 |
Gemma 3 1B Pt | 32K / 2 GB | 1958 | 3 |
Gemma 3 1B It ONNX | 32K / GB | 581 | 13 |
Gemma 3 1B Pt Peft | 32K / 2 GB | 85 | 0 |
Gemma 3 1B Peft Codealpaca | 32K / 4 GB | 41 | 0 |
POLAR Gemma 1b | 32K / 2.6 GB | 44 | 0 |
Gemma 3 1B Codealpaca | 32K / 4 GB | 31 | 0 |