CeptrixBeagle 12B MoE by allknowingroger


Tags: autotrain compatible, endpoints compatible, frankenmoe, lazymergekit, merge, mergekit, mixtral, moe, safetensors, sharded, tensorflow, region: us, license: apache-2.0, base models: allknowingroger/NeuralCeptrix-7B-slerp, paulml/OmniBeagleSquaredMBX-v3-7B
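
The frankenmoe / lazymergekit / mergekit tags indicate this model was assembled rather than trained from scratch: two 7B experts combined into a Mixtral-style MoE. As a rough illustration, a mergekit-moe configuration for such a 2x7B merge could look like the sketch below; the gate_mode choice and the positive_prompts are placeholder assumptions, not the author's actual settings.

```python
# Hypothetical sketch of a mergekit-moe config, similar to what LazyMergekit
# generates for a 2x7B frankenMoE. The routing prompts below are invented
# placeholders, not the published recipe for CeptrixBeagle-12B-MoE.
import subprocess
import textwrap

config = textwrap.dedent("""\
    base_model: allknowingroger/NeuralCeptrix-7B-slerp
    gate_mode: hidden                 # initialize routers from hidden states
    dtype: bfloat16                   # matches the published Torch data type
    experts:
      - source_model: allknowingroger/NeuralCeptrix-7B-slerp
        positive_prompts: ["reasoning", "instruction following"]   # placeholder
      - source_model: paulml/OmniBeagleSquaredMBX-v3-7B
        positive_prompts: ["chat", "general knowledge"]            # placeholder
    """)

with open("moe-config.yaml", "w") as f:
    f.write(config)

# mergekit's MoE entry point reads the YAML and writes a merged checkpoint.
subprocess.run(["mergekit-moe", "moe-config.yaml", "CeptrixBeagle-12B-MoE"], check=True)
```

With gate_mode: hidden, the per-expert positive_prompts are used to initialize the router weights, which is why a merge like this produces a working MoE without any gradient training.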

CeptrixBeagle 12B MoE Benchmarks

Rank the CeptrixBeagle 12B MoE Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
CeptrixBeagle 12B MoE (allknowingroger/CeptrixBeagle-12B-MoE)

Best Alternatives to CeptrixBeagle 12B MoE

Best Alternatives                   Score   Context / VRAM   Downloads   Likes
Multimerge 12B MoE                  76.68   32K / 25.8 GB    1281        0
Calmex26merge 12B MoE               76.6    32K / 25.8 GB    1347        0
Neuraljack 12B MoE                  76.24   32K / 25.8 GB    1344        0
...ltimerge Neurallaymons 12B MoE   76.21   32K / 25.8 GB    1355        0
WestLakeMultiverse 12B MoE          76.12   32K / 25.8 GB    1353        0
WestLakeLaser 12B MoE               75.89   32K / 25.8 GB    1358        0
JupiterINEX12 12B MoE               75.77   32K / 25.8 GB    1267        0
Multimaster 7B V6                   75.66   32K / 142.5 GB   2167        1
Lumina 4                            75.59   32K / 37.1 GB    1016        0
Calme 4x7B MoE V0.1                 75.53   32K / 48.3 GB    1659        2
Note: a score shown in green (e.g. "73.2") indicates that the model outperforms allknowingroger/CeptrixBeagle-12B-MoE.

CeptrixBeagle 12B MoE Parameters and Internals

LLM Name: CeptrixBeagle 12B MoE
Repository: allknowingroger/CeptrixBeagle-12B-MoE (Hugging Face)
Base Model(s): NeuralCeptrix 7B Slerp (allknowingroger/NeuralCeptrix-7B-slerp), OmniBeagleSquaredMBX V3 7B (paulml/OmniBeagleSquaredMBX-v3-7B)
Model Size: 7b
Required VRAM: 25.8 GB
Model Type: mixtral
Model Files: 13 safetensors shards; shards 1 and 13 are 1.9 GB each, shards 2 through 12 are 2.0 GB each (25.8 GB total)
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
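
Putting the internals above together, a minimal loading sketch with the transformers library might look like this (assuming transformers >= 4.39.3, accelerate installed for device_map support, and enough memory for the 25.8 GB of bfloat16 weights; the prompt is an arbitrary example):

```python
# Minimal loading sketch for allknowingroger/CeptrixBeagle-12B-MoE.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/CeptrixBeagle-12B-MoE"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published Torch data type
    device_map="auto",           # spreads the 13 shards across available devices
)

inputs = tokenizer(
    "Explain what a mixture-of-experts model is.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the checkpoint uses the MixtralForCausalLM architecture, no custom code is needed; the stock Auto classes resolve it from the model config, and the full 32768-token context is available out of the box.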


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801