Gemma 2B Hinglish LORA V1.0 by kirankunapuli


Tags: 4bit, Autotrain compatible, Base model:finetune:unsloth/ge..., Base model:unsloth/gemma-2b-bn..., Conversational, Dataset:hydraindiclm/hindi alp..., Dataset:ravithejads/samvaad-hi..., Dataset:yahma/alpaca-cleaned, En, Endpoints compatible, Gemma, Hi, Lora, Pytorch, Quantized, Region:us, Safetensors, Sharded, Trl, Unsloth

Gemma 2B Hinglish LORA V1.0 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Gemma 2B Hinglish LORA V1.0 (kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0)

Gemma 2B Hinglish LORA V1.0 Parameters and Internals

Model Type 
text-generation, transformers, unsloth, gemma, trl
Use Cases 
Areas:
e.g., research, commercial applications
Additional Notes 
The model was trained 2x faster using Unsloth and Hugging Face's TRL library.
Supported Languages 
en (English), hi (Hindi)
Training Details 
Data Sources:
yahma/alpaca-cleaned, ravithejads/samvaad-hi-filtered, HydraIndicLM/hindi_alpaca_dolly_67k
Data Volume:
14,343 training examples
Methodology:
LoRA fine-tuning
Training Time:
2118.7553 seconds (35.31 minutes)
Hardware Used:
Tesla T4. Max memory = 14.748 GB.
Model Architecture:
Fine-tuned from unsloth/gemma-2b-bnb-4bit using LoRA.
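The adapter hyperparameters listed on this page (R Param 16, LoRA Alpha 32) set the scaling of the low-rank update. As a toy illustration of the standard LoRA merge math, W' = W + (alpha/r)·B·A, here is a hedged sketch in plain Python (not code extracted from this repository):

```python
# Toy illustration of a LoRA weight merge (assumption: standard LoRA math,
# not the maintainer's training or inference code).
# W' = W + (alpha / r) * B @ A, with B (d_out x r) and A (r x d_in).

def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, r, alpha):
    """Merge a LoRA adapter (B @ A, scaled by alpha/r) into frozen weights W."""
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update, rank <= r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Tiny example: 2x2 frozen weights, rank-1 adapter. Using r=1, alpha=2
# gives the same scaling factor (alpha/r = 2) as the r=16, alpha=32
# configuration listed on this page.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]   # d_out x r
A = [[0.5, 0.5]]     # r x d_in
merged = lora_merge(W, A, B, r=1, alpha=2)
print(merged)  # → [[2.0, 1.0], [0.0, 1.0]]
```

Because dropout is 0 and bias is "none" (see the table below on this page), merging the adapter this way is equivalent to running base weights plus adapter at inference time.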
LLM Name: Gemma 2B Hinglish LORA V1.0
Repository 🤗: https://huggingface.co/kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0
Base Model(s): Gemma 2B Bnb 4bit (unsloth/gemma-2b-bnb-4bit)
Model Size: 2b
Required VRAM: 5.1 GB
Updated: 2024-12-22
Maintainer: kirankunapuli
Model Files: 0.1 GB; 5.0 GB (1-of-2); 0.1 GB (2-of-2)
Supported Languages: en, hi
Quantization Type: 4bit
Model Architecture: AutoModelForCausalLM
License: gemma
Model Max Length: 8192
Is Biased: none
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: o_proj, down_proj, q_proj, v_proj, k_proj, up_proj, gate_proj
LoRA Alpha: 32
LoRA Dropout: 0
R Param: 16
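The adapter settings listed above can be expressed as a `peft.LoraConfig`. This is a hedged sketch assuming the standard `peft` library API, with values taken from the table; it is not the repository's actual training script:

```python
# Sketch of the listed adapter hyperparameters as a peft.LoraConfig
# (assumes the `peft` library; values mirror the table above).
from peft import LoraConfig

config = LoraConfig(
    r=16,                  # "R Param"
    lora_alpha=32,         # "LoRA Alpha"
    lora_dropout=0.0,      # "LoRA Dropout"
    bias="none",           # "Is Biased: none"
    task_type="CAUSAL_LM", # assumption: text-generation task
    target_modules=[
        "o_proj", "down_proj", "q_proj", "v_proj",
        "k_proj", "up_proj", "gate_proj",
    ],                     # "PEFT Target Modules"
)
```

With alpha = 32 and r = 16, the effective scaling on the low-rank update is alpha/r = 2.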

Best Alternatives to Gemma 2B Hinglish LORA V1.0

Best Alternatives                  Context / RAM    Downloads   Likes
Hellfire 2B                        0K / 0.3 GB      2709        1
Gemma 1.1 QG V.2.0                 0K / 5.1 GB      8           0
Gemma2 Financial                   0K / 5.1 GB      24          1
...Summarizer 2B It LORA Bnb 4bit  0K / 0.1 GB      34          1
Gemma 2 Baku 2B                    8K / 10.5 GB     3712        6
...emma 2B Lora Lbox Ljp Modified  0K / 10 GB       11          0
Gemma 2B Orca Mwp Lora V0.2        0K / 0.2 GB      9           0
...hogpt Ft Qna Qchv 2021 Test Gr  0K / 0.1 GB      12          0
Quantized Mcqa Model               0K / 5.8 GB      17          0
Phogpt Ft Qna Qchv 2021            0K / 0.1 GB      14          0
Note: a green score (e.g., "73.2") means the model outperforms kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0.

Rank the Gemma 2B Hinglish LORA V1.0 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217