Mos Mamba 6x130m Trainer by jonathanjordan21


Tags: Adapter · Base model: jonathanjordan21/mo... · Custom code · Finetuned · Generated from trainer · LoRA · MoE · MosMamba · PEFT · Region: us · Safetensors · TensorBoard

Rank the Mos Mamba 6x130m Trainer Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Mos Mamba 6x130m Trainer (jonathanjordan21/mos-mamba-6x130m-trainer)

Mos Mamba 6x130m Trainer Parameters and Internals

LLM Name: Mos Mamba 6x130m Trainer
Repository: jonathanjordan21/mos-mamba-6x130m-trainer (open on 🤗 Hugging Face)
Base Model(s): Mos Mamba 6x130m Hf (jonathanjordan21/mos-mamba-6x130m-hf)
Model Size: 144m
Required VRAM: 0.1 GB
Updated: 2024-07-13
Maintainer: jonathanjordan21
Model Files: 0.1 GB, 0.6 GB, 0.0 GB
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: GPTNeoXTokenizer
Padding Token: <|endoftext|>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: in_proj|w3|w1|w0|gate|w4|w5|w2|out_proj|dt_proj
LoRA Alpha: 32
LoRA Dropout: 0
R Param: 32
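
For orientation, the hyperparameters above correspond to a standard PEFT LoraConfig, and the adapter attaches to the listed base checkpoint through the usual PEFT API. The snippet below is a minimal sketch under those assumptions, not an official recipe: it presumes the adapter follows the standard PEFT repository layout and that the custom MosMamba modeling code loads via trust_remote_code=True.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, PeftModel

    # Illustrative reconstruction of the adapter configuration from the
    # values listed above. PeftModel.from_pretrained reads the saved
    # adapter config itself, so this object is for reference only.
    lora_config = LoraConfig(
        r=32,
        lora_alpha=32,
        lora_dropout=0.0,
        bias="none",
        target_modules=["in_proj", "w0", "w1", "w2", "w3", "w4",
                        "w5", "gate", "out_proj", "dt_proj"],
    )

    base_id = "jonathanjordan21/mos-mamba-6x130m-hf"          # base model
    adapter_id = "jonathanjordan21/mos-mamba-6x130m-trainer"  # this adapter

    # The base model ships custom modeling code, so trust_remote_code is required.
    tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
    base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

    # Attach the ~0.1 GB LoRA adapter on top of the frozen base weights.
    model = PeftModel.from_pretrained(base, adapter_id)

    inputs = tokenizer("Hello,", return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Since LoRA Dropout is 0 and LoRA Alpha equals R, the effective LoRA scaling factor alpha/r is 1, meaning the low-rank update is added to the frozen weights unscaled.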


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801