MixTAO 7Bx2 MoE V8.1 by zhengr


Tags: Autotrain compatible, Conversational, Endpoints compatible, Mixtral, Model-index, MoE, Region:us, Safetensors, Sharded, Tensorflow

MixTAO 7Bx2 MoE V8.1 Benchmarks

MixTAO 7Bx2 MoE V8.1 (zhengr/MixTAO-7Bx2-MoE-v8.1)

MixTAO 7Bx2 MoE V8.1 Parameters and Internals

Model Type: MoE, text generation
Additional Notes: MixTAO-7Bx2-MoE is a Mixture of Experts model used mainly for experiments with large-model technology.
Input Output:
Input Format: Alpaca prompt template
Accepted Modalities: text
Output Format: text response
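The card specifies the Alpaca prompt template for input. Below is a minimal sketch of formatting a request with it, assuming the standard Alpaca instruction-only wording (verify the exact preamble against the model card):

```python
# Standard Alpaca instruction-only template (assumed wording; check the
# model card before relying on it in production).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize what a Mixture of Experts (MoE) model is."
)
```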
LLM Name: MixTAO 7Bx2 MoE V8.1
Repository: 🤗 https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1
Model Size: 12.9B
Required VRAM: 25.8 GB
Updated: 2024-09-16
Maintainer: zhengr
Model Type: mixtral
Model Files: 13 safetensors shards (shards 1 and 5 at 1.9 GB each, the remaining 11 at 2.0 GB each; 25.8 GB total)
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.1
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
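A minimal loading sketch with Hugging Face transformers (version 4.38.1 or newer, matching the checkpoint), using the bfloat16 dtype recorded above; the unquantized weights need roughly 25.8 GB of VRAM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhengr/MixTAO-7Bx2-MoE-v8.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the stored Torch data type
    device_map="auto",           # spreads the 13 safetensors shards across available devices
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSay hello.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```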

Quantized Models of the MixTAO 7Bx2 MoE V8.1

Model | Likes | Downloads | VRAM
MixTAO 7Bx2 MoE V8.1 GGUF | 1 | 162 | 4 GB
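For machines without 25.8 GB of VRAM, the GGUF quantization above can be run through llama.cpp bindings instead. A hedged sketch using llama-cpp-python; the repo id and quantization filename pattern below are assumptions, so check the actual GGUF repository for the available files:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF",  # assumed GGUF repo id
    filename="*Q4_K_M.gguf",                     # assumed quant file; pick one that exists
    n_ctx=32768,                                 # the model's full context length
)

out = llm(
    "### Instruction:\nSay hello.\n\n### Response:\n",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```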

Best Alternatives to MixTAO 7Bx2 MoE V8.1

Best Alternatives | Context / RAM | Downloads | Likes
MixTAO 7Bx2 MoE V8.1 | 32K / 25.8 GB | 9581 | 55
Inf Silent Kunoichi V0.1 2x7B | 32K / 25.6 GB | 5 | 0
Inf Silent Kunoichi V0.2 2x7B | 32K / 25.6 GB | 5 | 0
LogoS 7Bx2 MoE 13B V0.2 | 32K / 25.9 GB | 3852 | 10
MultiMash8 13B Slerp | 32K / 25.7 GB | 28 | 0
MultiMash9 13B Slerp | 32K / 25.7 GB | 26 | 0
MultiMash11 13B Slerp | 32K / 25.7 GB | 21 | 0
MultiMash10 13B Slerp | 32K / 25.7 GB | 26 | 0
Multimash3 12B Slerp | 32K / 25.7 GB | 29 | 0
MixTaoTruthful 13B Slerp | 32K / 25.7 GB | 19 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227