LoRA Llama 3 MLP by secretmoon


Adapter | Base model: Sao10K/L3-8B-Stheno-v3.1 | Conversational | En | Finetuned | LoRA | PEFT | Region: US | Safetensors

LoRA Llama 3 MLP Benchmarks

Scores (%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
LoRA Llama 3 MLP (secretmoon/LoRA-Llama-3-MLP)

LoRA Llama 3 MLP Parameters and Internals

Model Type: LoRA adapter for text-generation applications
Use Cases:
  Areas: fan fiction, role-playing scenarios, creative projects
  Applications: text generation in the MLP:FiM universe
  Primary Use Cases: generating narratives based on My Little Pony: Friendship is Magic
Additional Notes: Instructions for influencing the adapter's impact on model behavior are detailed in the repository's "Recommendations for LoRA Alpha settings".
Supported Languages: en (primary)
Training Details:
  Data Sources: cleaned copy of the MLP Fandom Wiki (Alpaca format); approximately 100 specially selected fan stories from FiMFiction (raw text); additional personal-assistant training data (Alpaca format)
  Methodology: 8-bit LoRA fine-tuning with a special focus on MLP:FiM universe content
  Context Length: 6144 tokens
  Training Time: 3 hours
  Hardware Used: 1x NVIDIA RTX A6000 (48 GB)
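The hyperparameters listed on this card map directly onto a PEFT LoRA configuration. A minimal sketch of what the training-side setup may have looked like, assuming the Hugging Face `peft` library (the actual training script is not published, so this is illustrative only):

```python
# Illustrative reconstruction of the card's published hyperparameters
# as a peft LoraConfig. The real training script is not public.
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,                 # rank of the low-rank update matrices ("R Param")
    lora_alpha=48,         # scaling numerator; effective scale = alpha / r
    lora_dropout=0.04,
    target_modules=[       # all attention and MLP projections are adapted
        "q_proj", "k_proj", "v_proj", "o_proj",
        "up_proj", "down_proj", "gate_proj",
    ],
    task_type="CAUSAL_LM",
)
```

Targeting both the attention projections and the MLP projections (rather than attention only) is what lets a relatively small adapter shift the base model's style and domain knowledge this broadly.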
LLM Name: LoRA Llama 3 MLP
Repository: https://huggingface.co/secretmoon/LoRA-Llama-3-MLP
Base Model(s): Sao10K/L3-8B-Stheno-v3.1
Model Size: 8B
Required VRAM: 0 GB
Updated: 2025-02-22
Maintainer: secretmoon
Model Files: 2.7 GB, 0.0 GB, 0.0 GB
Supported Languages: en
Model Architecture: Adapter
License: cc-by-nc-4.0
Is Biased: none
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: up_proj, k_proj, q_proj, v_proj, down_proj, o_proj, gate_proj
LoRA Alpha: 48
LoRA Dropout: 0.04
R Param: 256
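The listed LoRA Alpha (48) and R Param (256) jointly determine how strongly the adapter perturbs the base model: at inference, the low-rank update B·A·x is scaled by alpha / r before being added to the frozen weight's output. A self-contained numeric sketch of that forward pass (toy dimensions, not the real 8B weights):

```python
# Toy LoRA forward pass: y = W x + (alpha / r) * B (A x).
# Dimensions are tiny for illustration; the real adapter uses r = 256.
alpha, r = 48, 256
scaling = alpha / r            # 0.1875 for this adapter

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

# Frozen base weight W (2x2); LoRA factors A (1x2) and B (2x1), toy rank 1.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]               # down-projection to rank 1
B = [[2.0], [4.0]]             # up-projection back to 2 dims

x = [1.0, 1.0]
base = matvec(W, x)                 # frozen path: [1.0, 1.0]
delta = matvec(B, matvec(A, x))     # adapter path: [2.0, 4.0]
y = [b + scaling * d for b, d in zip(base, delta)]
print(y)                            # [1.375, 1.75]
```

Because the scale is alpha / r, raising alpha (as the repository's alpha recommendations suggest) increases the adapter's influence linearly, making the fine-tuned behavior more dominant over the base model's.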

Best Alternatives to LoRA Llama 3 MLP

Best Alternatives | Context / RAM | Downloads | Likes
...3 8B Instruct Bvr Finetune V3 | 8K / 16.1 GB | 5 | 0
Flippa V6 | 0K / 0 GB | 9 | 1
Llama 3 Korean 8B R V 0.1 | 0K / 0 GB | 7 | 0
...a7 4262 4abb 97b1 1879f340d32e | 0K / 0.3 GB | 22 | 0
Llama 3.1 8B Smart Lora | 0K / 0.2 GB | 0 | 1
...ultiModal Llama 3 8B Finetuned | 0K / 0 GB | 15 | 1
...lama 3 1 8B Instruct Orca ORPO | 0K / 0.1 GB | 14 | 2
FatDPOv2LoRA | 0K / 0.8 GB | 4 | 1
Adapter Test | 0K / 0.1 GB | 6 | 0
Vortex2 | 0K / 4.4 GB | 8 | 0
Note: a green score (e.g. "73.2") means the model outperforms secretmoon/LoRA-Llama-3-MLP.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227