Mixtral 8x7B Instruct V0.1 by dfurman


Tags: Arxiv:2310.06825 · Adapter · Base model:adapter:mistralai/m... · Base model:mistralai/mixtral-8... · Conversational · Dataset:garage-baind/open-plat... · Dataset:jondurbin/airoboros-2.... · Dataset:open-orca/slimorca · Finetuned · Instruct · Lora · Mistral · Model-index · Moe · Peft · Region:us · Safetensors

Mixtral 8x7B Instruct V0.1 Parameters and Internals

LLM Name: Mixtral 8x7B Instruct V0.1
Repository: Open on 🤗 Hugging Face
Base Model(s): mistralai/Mixtral-8x7B-v0.1
Required VRAM: 0.1 GB
Updated: 2024-07-27
Maintainer: dfurman
Instruction-Based: Yes
Model Files: 0.1 GB
Model Architecture: Adapter
License: apache-2.0
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: o_proj|v_proj|k_proj|q_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 64
Mixtral 8x7B Instruct V0.1 (dfurman/Mixtral-8x7B-Instruct-v0.1)
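
The fields above describe a LoRA adapter (r = 64, alpha = 16, dropout 0.1, targeting the q/k/v/o attention projections) trained on top of mistralai/Mixtral-8x7B-v0.1. As a rough illustration only, the sketch below shows a peft `LoraConfig` that mirrors those listed hyperparameters; it is an assumption-based reconstruction, not the contents of the repository's actual adapter_config.json.

```python
# Hypothetical LoraConfig mirroring the fields listed above (assumption, not
# taken from the repository itself).
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                  # "R Param: 64"
    lora_alpha=16,         # "LoRA Alpha: 16"
    lora_dropout=0.1,      # "LoRA Dropout: 0.1"
    bias="none",           # "Is Biased: none"
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # "PEFT Target Modules"
    task_type="CAUSAL_LM", # assumption: causal-LM fine-tuning
)
```

For inference, the adapter is loaded on top of the full base model. The following is a minimal sketch using standard transformers + peft calls, assuming the adapter repository ships its own tokenizer files; the prompt string is made up for illustration and this is not the maintainer's published usage code.

```python
# Minimal loading/inference sketch (assumes standard transformers + peft usage).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-v0.1"            # base model listed above
adapter_id = "dfurman/Mixtral-8x7B-Instruct-v0.1"  # this LoRA adapter (~0.1 GB of weights)

tokenizer = AutoTokenizer.from_pretrained(adapter_id)   # LlamaTokenizer, pad token <unk>
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Apply the LoRA weights (r=64, alpha=16, q/k/v/o projections) to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Explain mixture-of-experts models in one paragraph."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that the 0.1 GB figure above covers only the adapter weights; inference still requires loading the full Mixtral 8x7B base model, which is far larger.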

Best Alternatives to Mixtral 8x7B Instruct V0.1

Best Alternatives                 HF Rank   Context/RAM    Downloads   Likes
Vfgf                              0.2       0K / 0 GB      19          0
Test                              0.2       0K / 0 GB      13          0
Results                           0.2       0K / 0 GB      7           0
Phi 3 Mini QLoRA                  0.2       0K / 0 GB      153         0
Phi 3 Mini 4K Instruct Ru Lora    0.2       0K / 0.1 GB    19          0
Results Phi3 Medium 4k            0.2       0K / 0.1 GB    5           0
Results                           0.2       0K / 0.1 GB    7           0
Phi 3 Mini 4K Instruct Sft        0.2       0K / 0 GB      12          0
Phi3AdapterModel                  0.2       0K / 0.1 GB    7           0
Phi3 Mini 4K Qlora Adapter        0.2       0K / 0 GB      31          0

Rank the Mixtral 8x7B Instruct V0.1 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback greatly assists the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024072501