Gemma2b Based Finetuned Using Ludwig With Tldrnews Summarization T4 4bit Notmerged by alexsherstinsky


Tags: Arxiv:1910.09700 · 4bit · Adapter · Base model:adapter:google/gemm... · Base model:google/gemma-2b-it · Finetuned · Lora · Peft · Quantized · Region:us · Safetensors

Gemma2b Based Finetuned Using Ludwig With Tldrnews Summarization T4 4bit Notmerged Benchmarks

nn.n%: how this model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Model: alexsherstinsky/gemma2b-based-finetuned-using-ludwig-with-tldrnews-summarization-T4-4bit-notmerged

Gemma2b Based Finetuned Using Ludwig With Tldrnews Summarization T4 4bit Notmerged Parameters and Internals

LLM Name: Gemma2b Based Finetuned Using Ludwig With Tldrnews Summarization T4 4bit Notmerged
Repository 🤗: https://huggingface.co/alexsherstinsky/gemma2b-based-finetuned-using-ludwig-with-tldrnews-summarization-T4-4bit-notmerged
Base Model(s): Gemma 2B It (google/gemma-2b-it)
Model Size: 2B
Required VRAM: 0 GB
Updated: 2024-11-09
Maintainer: alexsherstinsky
Model Files: 0.0 GB
Quantization Type: 4-bit
Model Architecture: Adapter
Is Biased: none
PEFT Type: LoRA
LoRA Model: Yes
PEFT Target Modules: q_proj|v_proj
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 8
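
Because this repository contains only the unmerged LoRA adapter, it has to be loaded on top of the google/gemma-2b-it base model. The following is a minimal sketch, assuming the standard transformers + peft loading pattern for a 4-bit (NF4) quantized base; the prompt string is illustrative only and is not taken from the model card.

```python
# Minimal sketch: load google/gemma-2b-it in 4-bit and attach the unmerged LoRA adapter.
# Repo IDs come from the table above; the prompt format is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "google/gemma-2b-it"
ADAPTER_ID = "alexsherstinsky/gemma2b-based-finetuned-using-ludwig-with-tldrnews-summarization-T4-4bit-notmerged"

# 4-bit quantization config, matching the listed quantization type.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter (r=8, alpha=16, dropout=0.05, targets q_proj/v_proj per the table).
model = PeftModel.from_pretrained(base, ADAPTER_ID)

prompt = "Summarize the following news article in one sentence:\n<article text here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Alternatively, the adapter weights could be folded into a full-precision copy of the base model with peft's merge_and_unload(), which produces a standalone checkpoint at the cost of giving up the small adapter-only download.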

Best Alternatives to Gemma2b Based Finetuned Using Ludwig With Tldrnews Summarization T4 4bit Notmerged

Best Alternatives | Context / RAM | Downloads | Likes
...og Summerizer Gemma 2B It 4bit | 0K / 0.2 GB | 10 | 0
Finetuned Gemma3 | 8K / 5.1 GB | 5 | 0
Phi Gemma Nlaf V1 | 0K / 0.1 GB | 5 | 0
Phi Gemma Nlaf V0 | 0K / 0.1 GB | 6 | 0
Gemma 2B It Nlai P1 | 0K / 0 GB | 5 | 0
German 2B Lora 6K | 0K / 0 GB | 8 | 0
Ger Lora 3K Checkpoint | 0K / 0 GB | 8 | 0
1 8K Adater Ger | 0K / 0 GB | 6 | 0
2B Lora Adapter Llama Alpaca | 0K / 0.1 GB | 7 | 0
Google Gemma 2B 1719882571 | 0K / 0 GB | 6 | 0
Note: a green score (e.g. "73.2") means that the model is better than alexsherstinsky/gemma2b-based-finetuned-using-ludwig-with-tldrnews-summarization-T4-4bit-notmerged.

Rank the Gemma2b Based Finetuned Using Ludwig With Tldrnews Summarization T4 4bit Notmerged Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217