Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES by Doctor-Shotgun


Tags: Merged Model · arXiv:2306.01708 · arXiv:2311.03099 · Autotrain compatible · Endpoints compatible · Instruct · Mixtral · MoE · Region:us · Safetensors · Sharded · Tensorflow
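The two arXiv tags describe how this merge was produced: TIES-merging (arXiv:2306.01708) resolves sign conflicts between task vectors before averaging, and DARE (arXiv:2311.03099) sparsifies each task vector by randomly dropping entries and rescaling the survivors. As a rough illustration of the underlying math only (this is not the maintainer's actual merge script; the density, weights, and toy tensors below are assumptions):

```python
# Minimal sketch of the DARE + TIES merge math (arXiv:2311.03099, arXiv:2306.01708).
# Illustrative only -- the published model was produced with a merge toolkit;
# density, weights, and tensor values here are placeholder assumptions.
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: zero a random (1 - density) fraction of the task
    vector, then rescale survivors so the expected value is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def ties_merge(base: torch.Tensor, deltas: list[torch.Tensor],
               weights: list[float]) -> torch.Tensor:
    """TIES: elect a per-parameter sign by weighted mass, keep only the
    delta entries that agree with it, and average what survives."""
    stacked = torch.stack([w * d for w, d in zip(weights, deltas)])
    elected = torch.sign(stacked.sum(dim=0))           # majority sign per parameter
    agree = (torch.sign(stacked) == elected).float()   # sign-consistent entries only
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1.0)
    return base + merged

# Toy usage: merge two fine-tunes' task vectors back into a base tensor.
base = torch.zeros(4)
ft_a = torch.tensor([0.2, -0.1,  0.3, 0.0])
ft_b = torch.tensor([0.1,  0.2, -0.3, 0.1])
deltas = [dare(ft - base, density=0.5) for ft in (ft_a, ft_b)]
print(ties_merge(base, deltas, weights=[1.0, 1.0]))
```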

Hugging Face repository: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES
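A minimal loading sketch for the full-precision weights with Hugging Face transformers. It assumes roughly 94 GB of GPU memory in bfloat16 (or offloading via device_map); the prompt string and generation settings are placeholders:

```python
# Minimal loading sketch; needs ~93.5 GB of VRAM in bfloat16,
# or CPU/disk offload via device_map="auto" (requires accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,   # matches the torch dtype listed below
    device_map="auto",            # spread/offload across available devices
)

# [INST] ... [/INST] is the base Mixtral Instruct format; whether the merge
# keeps it exactly is an assumption, not confirmed by this page.
inputs = tokenizer("[INST] Write a short greeting. [/INST]",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```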

Quantized Models of the Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES

Model | Likes | Downloads | VRAM
....1 LimaRP ZLoss DARE TIES GPTQ | 6 | 7 | 23 GB
....1 LimaRP ZLoss DARE TIES GGUF | 4 | 340 | 15 GB
...0.1 LimaRP ZLoss DARE TIES AWQ | 3 | 351 | 24 GB
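The quantized repository names are truncated in the table above, so the file path below is a hypothetical placeholder. As a rough sketch of running a GGUF quant locally with llama-cpp-python (one of several compatible runtimes):

```python
# Hypothetical local GGUF file; the actual quant repo/file names are
# truncated in the table above, so this path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtral-limarp-zloss-dare-ties.Q2_K.gguf",  # placeholder path
    n_ctx=32768,       # model's full context length (see Internals below)
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)
out = llm("[INST] Say hello. [/INST]", max_tokens=32)
print(out["choices"][0]["text"])
```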

Best Alternatives to Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES

Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
Mixtral 8x7B Instruct V0.1 | 77.75 | 32K / 93.6 GB | 501288 | 3937
...lQA Mixtral 8x7B Instruct V0.1 | - | 32K / 43.3 GB | 9 | 2
Mixtral 8x7B Instruct V0.1 FP8 | - | 32K / 47.1 GB | 226 | 1
...tral 8x7B Instruct V0.1 FP8 V2 | - | 32K / 47.1 GB | 112 | 0
...tral 8x7B Instruct V0.1 FP8 V3 | - | 32K / 47.1 GB | 35 | 0
...tral 8x7B Instruct V0.1 FP8 V1 | - | 32K / 47.1 GB | 7 | 0
Mixtral Instruct ITR 8x7B | - | 32K / 91.4 GB | 1 | 1
Maid Yuzu V8 Alter | - | 32K / 91.7 GB | 1 | 2
Merge Mixtral Prometheus 8x7B | - | 32K / 91.9 GB | 216 | 1
...ELT Mixtral 8x7B Instruct V0.1 | - | 32K / 92 GB | 1 | 3
Note: a green score (e.g. "73.2") on the source page means that model is better than Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES.

Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES Parameters and Internals

LLM Name: Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES
Repository: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES (Hugging Face)
Merged Model: Yes
Model Size: 46.7B
Required VRAM: 93.5 GB
Updated: 2024-07-04
Maintainer: Doctor-Shotgun
Model Type: mixtral
Instruction-Based: Yes
Model Files: 10.0 GB (1-of-10), 10.0 GB (2-of-10), 10.0 GB (3-of-10), 10.0 GB (4-of-10), 10.0 GB (5-of-10), 10.0 GB (6-of-10), 10.0 GB (7-of-10), 10.0 GB (8-of-10), 9.9 GB (9-of-10), 3.6 GB (10-of-10)
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
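The 93.5 GB VRAM figure follows directly from the parameter count: 46.7B parameters at 2 bytes each in bfloat16 is about 93.4 GB, before activations and KV cache. Given the 32768-token context and LlamaTokenizer, prompts can be built with the tokenizer's chat template; a brief sketch, assuming the merge keeps the base Mixtral Instruct template (an assumption, not confirmed by this page):

```python
# Sketch of templated prompt construction; assumes the tokenizer ships the
# base Mixtral Instruct chat template (an assumption, not confirmed here).
from transformers import AutoTokenizer

repo = "Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES"
tokenizer = AutoTokenizer.from_pretrained(repo)

messages = [{"role": "user", "content": "Summarize DARE-TIES merging in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
print(prompt)  # e.g. "<s>[INST] ... [/INST]" under the Mixtral Instruct template
```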


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801