Llama3merge7 15B MoE by allknowingroger

Tags: Autotrain compatible, Conversational, Endpoints compatible, FrankenMoE, LazyMergekit, Merge, Mergekit, Mixtral, MoE, Safetensors, Sharded, TensorFlow, Region: US
Base model tags: Kukedlc/NeuralLlamita-3-8B-v0.2, cognitivecomputations/dolphin-2.9-llama3-8b

Llama3merge7 15B MoE Benchmarks

Llama3merge7 15B MoE (allknowingroger/Llama3merge7-15B-MoE)

Llama3merge7 15B MoE Parameters and Internals

Model Type: MoE
Additional Notes: This model is a Mixture of Experts (MoE) created with LazyMergekit from the two base models listed below.
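For reference, LazyMergekit is a thin wrapper around mergekit's MoE merging, which reads a YAML config naming the expert source models. The sketch below is a hypothetical illustration only: the gate mode, positive prompts, choice of base model, and output path are assumptions rather than the maintainer's actual settings, and it assumes the mergekit package (with its mergekit-moe entry point) is installed.

```python
# Hypothetical sketch of a mergekit-moe style merge of the two listed base
# models. Gate mode, prompts, and paths are illustrative assumptions, not the
# configuration actually used to build Llama3merge7-15B-MoE.
import subprocess
from pathlib import Path

config = """\
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
gate_mode: hidden          # assumption; cheap_embed / random are also possible
dtype: bfloat16
experts:
  - source_model: cognitivecomputations/dolphin-2.9-llama3-8b
    positive_prompts: ["chat", "instruction following"]
  - source_model: Kukedlc/NeuralLlamita-3-8B-v0.2
    positive_prompts: ["reasoning", "creative writing"]
"""

Path("moe_config.yaml").write_text(config)

# mergekit-moe <config> <output-dir>  (requires `pip install mergekit`)
subprocess.run(["mergekit-moe", "moe_config.yaml", "./Llama3merge7-15B-MoE"], check=True)
```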
LLM Name: Llama3merge7 15B MoE
Repository: https://huggingface.co/allknowingroger/Llama3merge7-15B-MoE
Base Model(s): Kukedlc/NeuralLlamita-3-8B-v0.2, cognitivecomputations/dolphin-2.9-llama3-8b
Model Size: 8B
Required VRAM: 27.5 GB
Updated: 2024-12-03
Maintainer: allknowingroger
Model Type: mixtral
Model Files: 15 safetensors shards (1-of-15: 1.1 GB; 2-of-15 through 14-of-15: 2.0 GB each; 15-of-15: 0.4 GB)
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128258
Torch Data Type: bfloat16
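Given the fields above (MixtralForCausalLM architecture, bfloat16 weights, 8192-token context, roughly 27.5 GB of sharded safetensors), the model should load through the standard transformers auto classes. A minimal loading sketch; the prompt and generation settings are illustrative:

```python
# Minimal loading sketch based on the fields above (MixtralForCausalLM,
# bfloat16, 8192-token context). Prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allknowingroger/Llama3merge7-15B-MoE"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # needs accelerate; weights total ~27.5 GB
)

prompt = "Explain what a Mixture of Experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```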

Best Alternatives to Llama3merge7 15B MoE

Best Alternatives                      Context / RAM   Downloads   Likes
Lamma3merge3 15B MoE                   8K / 27.5 GB    11          1
Lamma3merge2 15B MoE                   8K / 27.5 GB    10          0
Mergkit 1                              8K / 22.6 GB    7           0
Llama 3 8B Shisa 2x8B                  8K / 7.4 GB     10          2
Llama3merge8 15B MoE                   8K / 27.5 GB    6           0
Llama3merge6 15B MoE                   8K / 27.5 GB    6           0
...8B Finetune All V6 Epoch2 V0.1      2K / 18 GB      9           1
...oE 8B Pretrain 0520 Iter134999      2K / 18 GB      15          0
...Storm V1.15 4x8B B 8 0bpw EXL2      8K / 25.2 GB    6           0
... SnowStorm 4x8B 6.5bpw H8 EXL2      8K / 21 GB      7           2

Rank the Llama3merge7 15B MoE Capabilities

Have you tried this model? Rate its performance below. Your feedback helps the ML community identify the most suitable models for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227