LLM Explorer: A Curated Large Language Model Directory and Analytics

Pearl 3x7B by louisbrulenaudet



Tags: autotrain-compatible, base model: beowolx/CodeNinja-1.0-OpenChat-7B, base model: dvilasuero/DistilabelBeagle14-7B, base model: WizardLM/WizardMath-7B-V1.1, code, conversational, en, endpoints-compatible, frankenmoe, lazymergekit, license: apache-2.0, maths, merge, mergekit, mixtral, moe, python, region: us, safetensors, sharded, tensorflow
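The frankenmoe / lazymergekit / mergekit tags indicate that Pearl 3x7B was assembled as a mixture-of-experts merge of three existing 7B checkpoints rather than trained from scratch. The exact recipe is not published on this page, so the following is only a representative mergekit-moe configuration consistent with the tags and base models listed here; the base_model choice, gate_mode, and positive_prompts are illustrative assumptions.

# Illustrative sketch of a lazymergekit-style mergekit-moe recipe. Only the
# three expert checkpoints are taken from this page; base_model, gate_mode,
# and positive_prompts are assumptions, not the published recipe.
import yaml

config = {
    "base_model": "dvilasuero/DistilabelBeagle14-7B",  # assumed router base
    "gate_mode": "hidden",  # assumed routing mode (hidden-state similarity)
    "dtype": "float16",     # matches the card's torch data type
    "experts": [
        {"source_model": "dvilasuero/DistilabelBeagle14-7B",
         "positive_prompts": ["chat", "conversation", "assistant"]},
        {"source_model": "beowolx/CodeNinja-1.0-OpenChat-7B",
         "positive_prompts": ["code", "python", "programming"]},
        {"source_model": "WizardLM/WizardMath-7B-V1.1",
         "positive_prompts": ["math", "reasoning", "solve"]},
    ],
}

with open("pearl-3x7b.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# The merge is then produced with mergekit's CLI:
#   mergekit-moe pearl-3x7b.yaml ./Pearl-3x7B

A merge like this replicates only the MLP blocks per expert while sharing attention and embedding weights, which is consistent with the 18.5b parameter count listed below (roughly 7B for the shared backbone plus two extra sets of ~5.6B MLP weights) rather than a naive 3 x 7B.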

Pearl 3x7B Benchmarks

Rank the Pearl 3x7B Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Pearl 3x7B (louisbrulenaudet/Pearl-3x7B)

Best Alternatives to Pearl 3x7B

Best Alternatives                  | HF Rank | Context/RAM   | Downloads | Likes
Solutus 3x7B                       | 74.35   | 32K / 37.1 GB | 1302      | 7
SuperBruphin 3x7B                  | 73.75   | 32K / 37.1 GB | 1102      | 0
CultriX MoE BF16                   | 72.6    | 32K / 37.1 GB | 2255      | 0
MoE 3x7b QA Code Inst              | 66.7    | 32K / 37 GB   | 317       | 3
...oE 4x7b Mistral Llava Instruct  | n/a     | 32K / 37 GB   | 28        | 0
Topxtral 4x7B V0.1                 | n/a     | 32K / 37.1 GB | 3         | 3
Franken MoE 18B V0.1               | n/a     | 32K / 37.1 GB | 8         | 2
EastAsia 4x7B MoE Experiment       | n/a     | 32K / 37.1 GB | 2466      | 0
Note: a green score (e.g. "73.2") indicates that the model performs better than louisbrulenaudet/Pearl-3x7B.
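The Downloads and Likes figures are Hugging Face Hub statistics and drift over time. Below is a minimal sketch of fetching current numbers with the official huggingface_hub client; the abbreviated names in the table would first need resolving to full repo ids on the Hub.

# Query live download/like counts for a repo; figures change daily.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("louisbrulenaudet/Pearl-3x7B")
print(f"downloads={info.downloads} likes={info.likes}")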

Pearl 3x7B Parameters and Internals

LLM Name: Pearl 3x7B
Repository: louisbrulenaudet/Pearl-3x7B (open on 🤗 Hugging Face)
Base Model(s): dvilasuero/DistilabelBeagle14-7B (DistilabelBeagle14 7B), beowolx/CodeNinja-1.0-OpenChat-7B (CodeNinja 1.0 OpenChat 7B), WizardLM/WizardMath-7B-V1.1 (WizardMath 7B V1.1)
Model Size: 18.5b
Required VRAM: 37.1 GB
Updated: 2024-02-28
Maintainer: louisbrulenaudet
Model Type: mixtral
Model Files: 19 sharded safetensors files (shard 1-of-19: 1.9 GB; shards 2-of-19 through 18-of-19: 2.0 GB each; shard 19-of-19: 1.2 GB)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32002
Initializer Range: 0.02
Torch Data Type: float16
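Putting the listed settings together, here is a minimal loading sketch with transformers (4.37.2 or later, per the card). At float16 the full model needs roughly 37 GB of VRAM, so device_map="auto" lets accelerate spill layers to CPU when a single GPU is not enough; the prompt is illustrative.

# Minimal sketch: load Pearl 3x7B with the card's settings
# (MixtralForCausalLM, float16, 32768-token context, LlamaTokenizer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "louisbrulenaudet/Pearl-3x7B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the card's torch data type
    device_map="auto",          # shard across available devices
)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))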
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003