JupiterINEX12 12B MoE by allknowingroger


Tags: autotrain-compatible · conversational · endpoints-compatible · frankenmoe · lazymergekit · merge · mergekit · mixtral · moe · region:us · safetensors · sharded · tensorflow
Base models (merge): allknowingroger/JupiterMerge-7B-slerp, allknowingroger/RasGullaINEX12-7B-slerp

JupiterINEX12 12B MoE Benchmarks

JupiterINEX12 12B MoE (allknowingroger/JupiterINEX12-12B-MoE)

JupiterINEX12 12B MoE Parameters and Internals

Model Type: text generation
Additional Notes: This model was created with LazyMergekit using a custom mixture-of-experts approach (see the config sketch below).
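For reference, here is a minimal sketch of what such a LazyMergekit/mergekit MoE recipe could look like. The two expert repositories are taken from this card; the gate mode, positive prompts, and output path are illustrative placeholders, not the maintainer's published settings.

```python
# Hypothetical LazyMergekit-style MoE recipe; gate_mode and positive_prompts
# are placeholders, not the maintainer's published settings.
from pathlib import Path

moe_config = """\
base_model: allknowingroger/JupiterMerge-7B-slerp
gate_mode: hidden            # assumption: mergekit's default gating mode
dtype: bfloat16              # matches the card's torch data type
experts:
  - source_model: allknowingroger/JupiterMerge-7B-slerp
    positive_prompts: ["chat", "general assistance"]   # placeholder prompts
  - source_model: allknowingroger/RasGullaINEX12-7B-slerp
    positive_prompts: ["reasoning", "writing"]         # placeholder prompts
"""

Path("config.yaml").write_text(moe_config)
# The merged checkpoint is then built with mergekit's MoE entry point, e.g.:
#   mergekit-moe config.yaml ./JupiterINEX12-12B-MoE
```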
Input/Output
Input Format: chat messages
Accepted Modalities: text
Output Format: generated text
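A minimal inference sketch matching this input/output format, assuming the repository ships a chat template and that the bfloat16 weights (about 25.8 GB) fit on the available GPU(s):

```python
# Minimal sketch: chat messages in, generated text out.
# Assumes the repo provides a chat template; adjust prompting if it does not.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/JupiterINEX12-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the card's torch data type
    device_map="auto",            # spread the ~25.8 GB of weights across devices
)

messages = [{"role": "user", "content": "Explain mixture-of-experts models in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```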
LLM Name: JupiterINEX12 12B MoE
Repository: https://huggingface.co/allknowingroger/JupiterINEX12-12B-MoE
Base Model(s): allknowingroger/JupiterMerge-7B-slerp, allknowingroger/RasGullaINEX12-7B-slerp
Model Size: 7b
Required VRAM: 25.8 GB
Updated: 2025-02-05
Maintainer: allknowingroger
Model Type: mixtral
Model Files: 13 safetensors shards (shards 1 and 13: 1.9 GB each; shards 2–12: 2.0 GB each; ~25.8 GB total)
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
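The 25.8 GB figure lines up with the spec above: a Mixtral-style merge of two 7B experts has roughly 12.9B parameters (attention and embedding weights are shared), and bfloat16 uses 2 bytes per parameter. Below is a rough sanity check, plus a way to confirm the key config values without downloading the weights; the ~12.9B parameter count and the 2-expert figure are inferred from the two listed base models, not measured.

```python
# Back-of-the-envelope check on "Required VRAM: 25.8 GB". The ~12.9B-parameter
# figure is an assumption typical of 2x7B Mixtral-style merges, not a measured count.
approx_params = 12.9e9
bytes_per_param = 2  # bfloat16
print(f"~{approx_params * bytes_per_param / 1e9:.1f} GB of weights")  # ~25.8 GB

# The architecture, context length, and expert count can be read from the repo
# metadata alone (no weight download needed):
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("allknowingroger/JupiterINEX12-12B-MoE")
print(cfg.model_type, cfg.max_position_embeddings, cfg.num_local_experts)
# expected per the card: mixtral, 32768, with (presumably) 2 local experts
```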

Best Alternatives to JupiterINEX12 12B MoE

Best Alternatives | Context / RAM | Downloads | Likes
Multimaster 7B V6 | 32K / 142.5 GB | 4178 | 1
NeuralStar FusionWriter 4x7b | 32K / 48.3 GB | 6 | 5
Mixtral 7B 8expert | 32K / 93.6 GB | 9186 | 264
Laserxtral | 32K / 48.3 GB | 4231 | 78
Multilingual Mistral | 32K / 93.5 GB | 123 | 32
MultiverseBuddy 15B MoE | 32K / 25.8 GB | 9 | 0
Mini Mixtral V0.2 | 32K / 25.8 GB | 57 | 4
OpenMistral MoE | 32K / 48.3 GB | 1218 | 0
Merged Model MoE | 32K / 53.3 GB | 6 | 1
Lumina 2 | 32K / 37.1 GB | 5 | 0
Note: a green score (e.g., "73.2") on the source page marks a model that outperforms allknowingroger/JupiterINEX12-12B-MoE.

Rank the JupiterINEX12 12B MoE Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227