Qwen1.5 MoE A2.7B 20 Experts by Na0s


Tags: Arxiv:1910.09700, AutoTrain compatible, Base model (finetune of): Qwen/Qwen1.5-MoE-A2.7B, Conversational, English, Endpoints compatible, MoE, Qwen2 MoE, Region: US, Safetensors, Sharded, TensorFlow

Qwen1.5 MoE A2.7B 20 Experts Benchmarks

Benchmark scores are reported as nn.n%, showing how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Evaluated model: Qwen1.5 MoE A2.7B 20 Experts (Na0s/Qwen1.5-MoE-A2.7B-20-experts)

Qwen1.5 MoE A2.7B 20 Experts Parameters and Internals

LLM Name: Qwen1.5 MoE A2.7B 20 Experts
Repository: https://huggingface.co/Na0s/Qwen1.5-MoE-A2.7B-20-experts
Base Model(s): Qwen/Qwen1.5-MoE-A2.7B
Model Size: 5.2B parameters
Required VRAM: 20.7 GB
Updated: 2024-12-15
Maintainer: Na0s
Model Type: qwen2_moe
Model Files: 5 shards (1-of-5: 5.0 GB, 2-of-5: 5.0 GB, 3-of-5: 5.0 GB, 4-of-5: 4.5 GB, 5-of-5: 1.2 GB)
Supported Languages: en
Model Architecture: Qwen2MoeForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.45.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: float32
Errors: replace
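
Since the card lists the repository ID, architecture (Qwen2MoeForCausalLM), tokenizer class, and a transformers version, a minimal loading sketch follows. This is an illustrative example, not an official snippet from the maintainer: it assumes the standard Hugging Face transformers API (transformers >= 4.45.1, plus torch and accelerate), and the bfloat16 downcast and device_map choice are assumptions made here to fit the weights in less memory.

```python
# Minimal sketch: loading Na0s/Qwen1.5-MoE-A2.7B-20-experts with transformers.
# Assumes transformers >= 4.45.1 (the version listed above), torch, and
# accelerate are installed, and that there is enough memory for the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Na0s/Qwen1.5-MoE-A2.7B-20-experts"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # weights ship as float32; downcast to save memory (assumption)
    device_map="auto",           # requires accelerate; spreads the 5 shards across devices
)

prompt = "Briefly explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # context window is 8192 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For reference, the 20.7 GB VRAM figure is roughly what 5.2B float32 parameters occupy (5.2 × 10⁹ parameters × 4 bytes ≈ 20.8 GB); loading in bfloat16 as above roughly halves that.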

Rank the Qwen1.5 MoE A2.7B 20 Experts Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227