Pearl 3x7B by louisbrulenaudet


Tags: autotrain compatible, code, conversational, en, endpoints compatible, frankenmoe, lazymergekit, maths, merge, mergekit, mixtral, moe, python, region:us, safetensors, sharded, tensorflow

Pearl 3x7B Benchmarks

Pearl 3x7B (louisbrulenaudet/Pearl-3x7B)

Pearl 3x7B Parameters and Internals

Model Type 
text-generation, Mixture of Experts (MoE)
Additional Notes 
Specializes in chat, code, and mathematics tasks using a Mixture of Experts model.
Supported Languages 
English (fluent)
Training Details 
Data Sources:
dvilasuero/DistilabelBeagle14-7B, beowolx/CodeNinja-1.0-OpenChat-7B, WizardLM/WizardMath-7B-V1.1
Methodology:
Mixture of Experts (MoE) merge combining chat, code, and mathematics expert models (see the routing sketch after this block)
Model Architecture:
Mixture of Experts (MoE)
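
The Mixtral-style sparse routing this kind of merge relies on can be illustrated with a short, self-contained sketch (illustrative names and shapes, not the author's merge code): a learned gate scores each token, the top-k experts are selected, and their outputs are mixed by the softmaxed gate weights.

```python
# Illustrative sketch of Mixtral-style sparse MoE routing (hypothetical helper,
# not code from this repository): a gate scores each token, the top-k experts
# are chosen per token, and their outputs are combined by softmaxed gate weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

def moe_forward(x, gate, experts, top_k=2):
    """x: (tokens, hidden); gate: nn.Linear(hidden, n_experts); experts: list of FFNs."""
    scores = gate(x)                                  # (tokens, n_experts)
    weights, idx = torch.topk(scores, top_k, dim=-1)  # top-k expert logits per token
    weights = F.softmax(weights, dim=-1)              # normalize over the selected experts
    out = torch.zeros_like(x)
    for slot in range(top_k):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                  # tokens routed to expert e in this slot
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

# Toy usage: 3 experts (e.g. chat / code / math), hidden size 16
experts = [nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 16)) for _ in range(3)]
gate = nn.Linear(16, 3)
y = moe_forward(torch.randn(5, 16), gate, experts)    # -> shape (5, 16)
```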
Input Output 
Input Format:
Chat format: a list of message dictionaries with 'role' and 'content' keys (see the usage sketch after this block).
Accepted Modalities:
text
Output Format:
Generated text
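
A minimal usage sketch with the Hugging Face transformers library, following the input format above. The model id comes from the Repository field; the prompt is illustrative, and it is assumed the repo's tokenizer ships a chat template.

```python
# Minimal usage sketch (assumes the tokenizer provides a chat template).
# Roughly 37 GB of VRAM is needed for the full float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louisbrulenaudet/Pearl-3x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The card's input format: a list of {'role', 'content'} messages
messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```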
LLM Name: Pearl 3x7B
Repository: 🤗 https://huggingface.co/louisbrulenaudet/Pearl-3x7B
Base Model(s): dvilasuero/DistilabelBeagle14-7B, beowolx/CodeNinja-1.0-OpenChat-7B, WizardLM/WizardMath-7B-V1.1
Model Size: 18.5B
Required VRAM: 37.1 GB
Updated: 2025-02-19
Maintainer: louisbrulenaudet
Model Type: mixtral
Model Files: 19 safetensors shards totaling 37.1 GB (shard 1: 1.9 GB; shards 2–18: 2.0 GB each; shard 19: 1.2 GB)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32002
Torch Data Type: float16
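
As a quick sanity check, the Required VRAM figure above follows directly from the parameter count and dtype: 18.5B float16 parameters at 2 bytes each come to about 37 GB, matching the 37.1 GB of sharded model files.

```python
# Back-of-the-envelope check: parameters x bytes-per-parameter ~= weight memory.
params = 18.5e9              # 18.5B parameters
bytes_per_param = 2          # float16
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # -> 37.0 GB
```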

Best Alternatives to Pearl 3x7B

Best Alternatives | Context / RAM | Downloads | Likes
EastAsia 4x7B MoE Experiment | 32K / 37.1 GB | 1657 | 1
Lumina 3.5 | 32K / 37.1 GB | 2763 | 0
Topxtral 4x7B V0.1 | 32K / 37.1 GB | 3886 | 4
Hyperion 3.0 Mixtral 3x7B | 32K / 37.1 GB | 37 | 4
Blitz AI MoE V0.7 | 32K / 37.1 GB | 27 | 1
Blitz AI MoE V0.4 | 32K / 37.1 GB | 28 | 1
HeroBophades 3x7B | 32K / 37.1 GB | 7 | 2
Wizard Kun Lake 3x7B MoE | 32K / 37.1 GB | 9 | 1
NaruMOE 3x7B V2 | 32K / 37.1 GB | 9 | 0
MoE 3x7b QA Code Inst | 32K / 37 GB | 38 | 4
Note: a green score (e.g. "73.2") indicates that the model outperforms louisbrulenaudet/Pearl-3x7B.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227