| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| ...1 Fine Tuned Using Ludwig 4bit | 60.1 | 0K / 0 GB | 1 | 1 |
| Dragoman | 60.1 | 0K / 2.7 GB | 301 | 10 |
| Mistral Chess | 60.1 | 0K / 2.7 GB | 94 | 0 |
| Mistral Finetuned DialogSumm | 47.9 | 0K / 0 GB | 7 | 1 |
| ...1 Fine Tuned Using Ludwig 4bit | 40.4 | 0K / 0 GB | 1 | 1 |
| Falcon 7B Spanish 8bit | — | 0K / 0 GB | 6 | 3 |
| Llama 2 7B Hf Codealpaca 4bit | — | 0K / 0 GB | 1 | 1 |
| Falcon 7B Instruct 4bit Lora | — | 0K / 0 GB | 0 | 1 |
| Gemma7b SFT | — | 0K / 0 GB | 10 | 0 |
| Llama 2 7B Chat Vicuna Hf 4bit | — | 0K / 0.1 GB | 0 | 6 |
| LLM Name | Gemma 7B Us Minecraft |
|---|---|
| Repository | Open on 🤗 |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 0 GB |
| Updated | 2024-07-05 |
| Maintainer | emre570 |
| Model Files | |
| Quantization Type | 4bit |
| Model Architecture | Adapter |
| Is Biased | none |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | <pad> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | v_proj, o_proj, gate_proj, q_proj, up_proj, k_proj, down_proj |
| LoRA Alpha | 32 |
| LoRA Dropout | 0 |
| R Param | 32 |
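
The rows above fully determine the adapter's LoRA configuration (rank, alpha, dropout, bias, and target modules) and state that the checkpoint is a 4-bit LoRA adapter rather than a merged model. Below is a minimal sketch of how such an adapter could be loaded with `transformers` and `peft`. The repo id `emre570/gemma-7b-us-minecraft` is an assumption inferred from the maintainer and model name, and `google/gemma-7b` is assumed as the base model; check the linked Hugging Face page for the actual identifiers.

```python
# A minimal sketch, not the maintainer's published code. The adapter repo id
# below is an assumption inferred from the maintainer and model name above;
# verify the real id on the linked Hugging Face page before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, PeftModel

BASE_ID = "google/gemma-7b"                    # assumed 7B Gemma base model
ADAPTER_ID = "emre570/gemma-7b-us-minecraft"   # hypothetical adapter repo id

# The LoraConfig implied by the table above. In practice this ships as
# adapter_config.json inside the adapter repo, so you rarely build it by hand;
# it is reconstructed here only to show how the rows map to PEFT parameters.
lora_config = LoraConfig(
    r=32,              # R Param
    lora_alpha=32,     # LoRA Alpha
    lora_dropout=0.0,  # LoRA Dropout
    bias="none",       # Is Biased
    target_modules=["v_proj", "o_proj", "gate_proj", "q_proj",
                    "up_proj", "k_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Load the base model in 4-bit, matching the "Quantization Type: 4bit" row.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)  # GemmaTokenizer
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter on top of the quantized base ("Model Architecture:
# Adapter" means only the small LoRA weights live in this repo).
model = PeftModel.from_pretrained(base, ADAPTER_ID)
```

Because the card lists the architecture as an adapter rather than a merged checkpoint, the 0 GB "Required VRAM" figure presumably covers only the small LoRA weights; the 4-bit Gemma base model still has to be loaded separately as sketched above.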