Mixtral 8x7b MonsterInstruct by Zangs3011


Tags: Arxiv:1910.09700 · Adapter · Base model: mistralai/mixtral-8... · Dataset: qblocks/monsterinstruc... · Finetuned · License: apache-2.0 · LoRA · MoE · PEFT · Region: us · Safetensors

Rank the Mixtral 8x7b MonsterInstruct Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Mixtral 8x7b MonsterInstruct (Zangs3011/mixtral_8x7b_MonsterInstruct)

Best Alternatives to Mixtral 8x7b MonsterInstruct

Best Alternatives            Context / VRAM    HF Rank
Alpaca13B Lora               0K / 0 GB         033
Dolly Lora                   0K / 0 GB         025
Aurora                       0K / 0 GB         020
Gpt4all J Lora               0K / 0 GB         018
Alpaca                       0K / 0 GB         014
WizardLM LlaMA LoRA 13       0K / 0 GB         013
Alpaca7B Lora                0K / 0 GB         08
Alpaca Lora German           0K / 0 GB         08
Jamba Chat                   0K / 0 GB         37
Gigasaiga Lora               0K / 0 GB         07

Mixtral 8x7b MonsterInstruct Parameters and Internals

LLM Name: Mixtral 8x7b MonsterInstruct
Repository: Open on 🤗
Base Model(s): Mixtral 8x7B V0.1 (mistralai/Mixtral-8x7B-v0.1)
Required VRAM: 1 GB
Model Files: 1 GB
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
LoRA Model: Yes
PEFT Target Modules: w3 | o_proj | w1 | w2 | q_proj | k_proj | v_proj
LoRA Alpha: 32
LoRA Dropout: 0
R Param: 16
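The LoRA settings listed above (R Param 16, LoRA Alpha 32, Dropout 0) determine how strongly the adapter's low-rank update is applied on top of the frozen base weights: the update B·A·x is scaled by alpha/r. A minimal sketch in plain Python, with toy dimensions chosen for illustration (the real projection sizes in Mixtral are much larger, and the actual loading would go through the peft library):

```python
# Minimal sketch of a LoRA forward delta using the hyperparameters
# listed above: r=16, alpha=32, dropout=0. Toy dimensions (d_in, d_out)
# and the constant 0.01 init for A are illustrative assumptions only.
r, alpha = 16, 32
scaling = alpha / r  # the low-rank update is scaled by alpha/r = 2.0

d_in, d_out = 8, 8                        # toy sizes, not the real model's
A = [[0.01] * d_in for _ in range(r)]     # r x d_in (normally random-init)
B = [[0.0] * r for _ in range(d_out)]     # d_out x r, zero-init so the
                                          # adapter starts as a no-op

def matvec(M, v):
    # Dense matrix-vector product over plain lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_delta(x):
    # delta = scaling * B @ (A @ x); this is added to the frozen
    # base projection's output for each targeted module (q_proj, etc.).
    return [scaling * y for y in matvec(B, matvec(A, x))]

x = [1.0] * d_in
print(lora_delta(x))  # all zeros before training, since B is zero-init
```

Because B starts at zero, the adapter contributes nothing until it is trained; the alpha/r = 2.0 factor then amplifies whatever the rank-16 factors learn.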

Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024040901