TinyQwex 4x620M MoE by Isotonic


Tags: Autotrain compatible · Conversational · Endpoints compatible · Lazymergekit · License: Apache-2.0 · Merge · Mergekit · Mixtral · MoE · Qwen/Qwen1.5-0.5B · Region: US · Safetensors · Sharded · TensorFlow
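
The Lazymergekit / Mergekit / MoE / Mixtral tags indicate the model was assembled by wiring four copies of Qwen/Qwen1.5-0.5B (~620M parameters each) into a Mixtral-style mixture-of-experts using mergekit's `mergekit-moe` tool. The recipe below is a hypothetical reconstruction in that style, not the author's published configuration; the `gate_mode` choice and the output directory name are assumptions.

```python
# Hypothetical reconstruction of a Lazymergekit-style mergekit-moe recipe:
# four copies of Qwen1.5-0.5B as experts behind a Mixtral router.
# The exact config used for TinyQwex is not published here; this is a sketch.
import subprocess

config = """\
base_model: Qwen/Qwen1.5-0.5B
gate_mode: random        # assumed; random gates need no positive_prompts
dtype: bfloat16          # matches the bfloat16 torch dtype listed on the card
experts:
  - source_model: Qwen/Qwen1.5-0.5B
  - source_model: Qwen/Qwen1.5-0.5B
  - source_model: Qwen/Qwen1.5-0.5B
  - source_model: Qwen/Qwen1.5-0.5B
"""

with open("moe_config.yaml", "w") as f:
    f.write(config)

# mergekit-moe is the MoE-building entry point installed by the mergekit package.
subprocess.run(["mergekit-moe", "moe_config.yaml", "TinyQwex-4x620M-MoE"], check=True)
```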

TinyQwex 4x620M MoE (Isotonic/TinyQwex-4x620M-MoE)

Best Alternatives to TinyQwex 4x620M MoE

Best Alternatives                  Context / VRAM   HF Rank
Qwen1.5 4x0.5B MoE                 32K / 2.5 GB     530
Verysmol Llama V11 KIx2 64x58M     1K / 5 GB        50

TinyQwex 4x620M MoE Parameters and Internals

LLM Name: TinyQwex 4x620M MoE
Repository: Isotonic/TinyQwex-4x620M-MoE (open on 🤗 Hugging Face)
Model Size: 1.2B parameters
Required VRAM: 2.5 GB
Model Type: mixtral
Model Files: 2.5 GB (shard 1 of 1)
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Initializer Range: 0.02
Torch Data Type: bfloat16
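
Since the architecture is the stock MixtralForCausalLM and the card lists transformers 4.39.2 (Mixtral support landed in 4.36), the model should load through the standard Auto classes. A minimal sketch, assuming the default revision and that `accelerate` is installed for `device_map="auto"`; the prompt is illustrative only:

```python
# Minimal loading sketch using the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Isotonic/TinyQwex-4x620M-MoE"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # Qwen2Tokenizer, pad token <|endoftext|>
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the card's bfloat16 weights (~2.5 GB)
    device_map="auto",           # requires the accelerate package
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The file size is consistent with the listed parameter count: roughly 1.2B parameters × 2 bytes per bfloat16 weight ≈ 2.4 GB, matching the single 2.5 GB shard and the quoted VRAM requirement (before activation and KV-cache overhead).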


Original data from Hugging Face, OpenCompass, and various public Git repositories.