Mistral FreeLiPPA LoRA 12B by mpasila


Tags: Base model:adapter:mistralai/m... · Base model:mistralai/mistral-n... · Dataset:grimulkan/limarp-augme... · Dataset:karakarawitch/pippa-sh... · Dataset:mpasila/limarp-pippa-f... · Dataset:openerotica/freedom-rp · En · Lora · Mistral · Peft · Region:us · Safetensors · Trl · Unsloth

Mistral FreeLiPPA LoRA 12B Benchmarks

Benchmark scores show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Mistral FreeLiPPA LoRA 12B (mpasila/Mistral-freeLiPPA-LoRA-12B)

Mistral FreeLiPPA LoRA 12B Parameters and Internals

Model Type 
text generation (inference)
Additional Notes 
This model was trained 2x faster using Unsloth together with Hugging Face's TRL library.
Supported Languages 
en (English)
Training Details 
Data Sources:
mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K, grimulkan/LimaRP-augmented, KaraKaraWitch/PIPPA-ShareGPT-formatted, openerotica/freedom-rp
Methodology:
LoRA trained in 4-bit with 8k context for 1 epoch
Context Length:
8000
Model Architecture:
LoRA merged
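The recipe above (4-bit base model, 8k context, one epoch, Unsloth + TRL) might look roughly like the following sketch. This is not the author's actual script: the dataset field name and batch settings are illustrative assumptions, while r=128, alpha=256, dropout=0, the target modules, and the 8k/1-epoch schedule come from this card.

```python
# Hypothetical training sketch -- NOT the author's exact script.
# Assumes unsloth, trl, transformers, and datasets are installed.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mistral-Nemo-Base-2407",
    max_seq_length=8000,
    load_in_4bit=True,          # "LoRA trained in 4-bit"
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,                      # from the card
    lora_alpha=256,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumed column name
    max_seq_length=8000,
    args=TrainingArguments(
        num_train_epochs=1,                 # "for 1 epoch"
        per_device_train_batch_size=2,      # illustrative
        output_dir="outputs",
    ),
)
trainer.train()
```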
Input Output 
Input Format:
ChatML
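Since the card lists ChatML as the input format, prompts should be wrapped in `<|im_start|>`/`<|im_end|>` markers. A minimal helper sketch (the roles and message contents below are just examples, not from the card):

```python
# Minimal ChatML prompt builder. The system/user messages are examples only.
def to_chatml(messages):
    """Render a list of {'role', 'content'} dicts in ChatML form,
    ending with an open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful roleplay partner."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The resulting string ends with an open `<|im_start|>assistant` turn, so generation continues as the assistant's reply.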
LLM Name: Mistral FreeLiPPA LoRA 12B
Repository 🤗: https://huggingface.co/mpasila/Mistral-freeLiPPA-LoRA-12B
Base Model(s): Mistral Nemo Base 2407 (mistralai/Mistral-Nemo-Base-2407)
Model Size: 12b
Required VRAM: 1.8 GB
Updated: 2025-01-20
Maintainer: mpasila
Model Files: 1.8 GB
Supported Languages: en
Model Architecture: AutoModel
License: apache-2.0
Is Biased: none
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <pad>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: gate_proj|up_proj|down_proj|q_proj|o_proj|v_proj|k_proj
LoRA Alpha: 256
LoRA Dropout: 0
R Param: 128
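The 1.8 GB adapter size is consistent with these hyperparameters. A back-of-the-envelope check, assuming the published Mistral-Nemo-Base-2407 dimensions and fp32 adapter weights (both assumptions; neither is stated on this card):

```python
# Rough LoRA adapter size check. Base-model dimensions are taken from the
# mistralai/Mistral-Nemo-Base-2407 config (an assumption, not from this card).
hidden, inter, n_layers = 5120, 14336, 40
head_dim, n_heads, n_kv_heads = 128, 32, 8
r = 128  # "R Param" from the card

q_out = n_heads * head_dim       # 4096
kv_out = n_kv_heads * head_dim   # 1024

# (in_features, out_features) of each targeted projection
shapes = {
    "q_proj":    (hidden, q_out),
    "k_proj":    (hidden, kv_out),
    "v_proj":    (hidden, kv_out),
    "o_proj":    (q_out, hidden),
    "gate_proj": (hidden, inter),
    "up_proj":   (hidden, inter),
    "down_proj": (inter, hidden),
}

# Each LoRA A/B pair adds r*(d_in + d_out) parameters per targeted matrix.
lora_params = n_layers * sum(r * (d_in + d_out) for d_in, d_out in shapes.values())
size_gb = lora_params * 4 / 1e9  # 4 bytes/param if stored in fp32
print(lora_params, round(size_gb, 2))  # -> 456130560 1.82
```

About 456M trainable parameters at fp32 gives roughly 1.82 GB, matching the listed 1.8 GB of model files.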

Best Alternatives to Mistral FreeLiPPA LoRA 12B

Best Alternatives | Context / RAM | Downloads / Likes
...tral Nemo 12B Abliterated LORA | 0K / 0.5 GB | 01
...ast Pythia 12B Sft V8 7K Steps | 0K / 23.7 GB | 201
Ct2fast M2m100 12B Last Ckpt | 0K / 23.6 GB | 176
Ct2fast Dolly V2 12B | 0K / 11.9 GB | 83
Llama3 12B Wwe GGUF | 0K / 5.3 GB | 1300
Calme 12B Instruct V0.1 GGUF | 0K / 4.7 GB | 362
Merlyn Education Safety GGUF | 0K / 4.9 GB | 731
Dolly V2 GGML | 0K / 1.6 GB | 402
Note: green Score (e.g. "73.2") means that the model is better than mpasila/Mistral-freeLiPPA-LoRA-12B.

Rank the Mistral FreeLiPPA LoRA 12B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227