Mixtral 8x7B Instruct V0.1 by dfurman


Arxiv: 2310.06825 · Adapter · Base model: mistralai/mixtral-8... · Conversational · Dataset: garage-baind/open-plat... · Dataset: jondurbin/airoboros-2.... · Dataset: open-orca/slimorca · Finetuned · Instruct · License: apache-2.0 · LoRA · Mistral · Model-index · MoE · PEFT · Region: us · Safetensors

Rank the Mixtral 8x7B Instruct V0.1 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Mixtral 8x7B Instruct V0.1 (dfurman/Mixtral-8x7B-Instruct-v0.1)

Best Alternatives to Mixtral 8x7B Instruct V0.1

Best Alternatives                      Context / VRAM   Downloads   HF Rank
WizardLM LlaMA LoRA 13                 0K / 0 GB        0           13
Gigasaiga Lora                         0K / 0 GB        0           7
Bloomz 7b1 Instruct                    0K / 0 GB        0           4
Caramelinho                            0K / 0 GB        9           3
...m 6b4 Clp German Instruct Lora      0K / 0 GB        0           2
... Clp German Instruct Lora Peft      0K / 0 GB        0           1
GeoV Instruct LoRA                     0K / 0 GB        0           1
...zardLM LlaMA LoRA 13bbbaaaaddd      0K / 0 GB        0           1
Phi 2 Instruction                      0K / 0 GB        23          20
Mixtral DPO 1000                       0K / 0 GB        6           0

Mixtral 8x7B Instruct V0.1 Parameters and Internals

LLM Name: Mixtral 8x7B Instruct V0.1
Repository: dfurman/Mixtral-8x7B-Instruct-v0.1 (open on 🤗 Hugging Face)
Base Model(s): Mixtral 8x7B V0.1 (mistralai/Mixtral-8x7B-v0.1)
Required VRAM: 0.1 GB
Model Files: 0.1 GB
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
LoRA Model: Yes
PEFT Target Modules: o_proj|v_proj|k_proj|q_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 64
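
Note that this listing describes a LoRA adapter of roughly 0.1 GB, not a full checkpoint: at inference time the adapter is applied on top of the frozen mistralai/Mixtral-8x7B-v0.1 base model, which still needs to be loaded in full. Below is a minimal sketch of loading it with the Hugging Face transformers and peft libraries, using only the values from the table above; the dtype, device_map, prompt, and generation settings are illustrative assumptions, not values from this card.

```python
# Minimal sketch: apply the dfurman/Mixtral-8x7B-Instruct-v0.1 LoRA adapter
# (r=64, alpha=16, dropout=0.1, targets q_proj/k_proj/v_proj/o_proj) to its
# base model. The dtype/device settings below are assumptions, not card values.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-v0.1"            # Base Model(s), per the card
adapter_id = "dfurman/Mixtral-8x7B-Instruct-v0.1"  # this adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)  # LlamaTokenizer, per the card
tokenizer.pad_token = tokenizer.unk_token           # Padding Token: <unk>

base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16; the full MoE base is tens of GB
    device_map="auto",           # assumption: shard across available devices
)

# Attach the ~0.1 GB adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain mixture-of-experts models in one paragraph."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In other words, the 0.1 GB "Required VRAM" figure covers only the adapter weights; serving this model still requires enough memory for the Mixtral 8x7B base itself.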


Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024040901