Beyonder 4x7B V3 Random Lora by Aratako


Tags: AutoTrain compatible, Conversational, Endpoints compatible, License: cc-by-nc-4.0, LoRA, Merge, Mergekit, Mixtral, Model-index, MoE, Region: us, Safetensors, Sharded, Tensorflow
Base models: beowolx/CodeNinja-1.0-OpenChat-7B, mlabonne/AlphaMonarch-7B, mlabonne/NeuralDaredevil-7B, SanjiWatsuki/Kunoichi-DPO-v2-7B

Best Alternatives to Beyonder 4x7B V3 Random Lora

Best Alternatives               Score   Context   Size      Downloads   Likes
CognitiveFusion2 4x7B BF16      76.86   32K       48.3 GB   1017        2
MixtureofMerges MoE 4x7b V4     76.23   32K       48.3 GB   2444        2
MixtureofMerges MoE 4x7b V5     76.02   32K       48.3 GB   2441        1
NeuralMona MoE 4x7B             75.99   32K       48.3 GB   2066        0
Mera Mix 4x7B                   75.91   32K       48.3 GB   3841        3
Beyonder 4x7B V3                75.65   32K       48.3 GB   4149        54
Calme 4x7B MoE V0.1             75.53   32K       48.3 GB   1221        2
Calme 4x7B MoE V0.2             75.42   32K       48.3 GB   1084        1
SuperFlammen 4x7B               75.36   32K       48.3 GB   799         0
MixtureofMerges MoE 4x7b V3     75.31   32K       48.3 GB   2426        0
Note: a score higher than that of Aratako/Beyonder-4x7B-v3-random-lora indicates a better-performing alternative.

Beyonder 4x7B V3 Random Lora Parameters and Internals

LLM Name: Beyonder 4x7B V3 Random Lora
Repository: Aratako/Beyonder-4x7B-v3-random-lora (Hugging Face)
Base Model(s): mlabonne/AlphaMonarch-7B, beowolx/CodeNinja-1.0-OpenChat-7B, SanjiWatsuki/Kunoichi-DPO-v2-7B, mlabonne/NeuralDaredevil-7B
Model Size: 24.2B
Required VRAM: 48.5 GB
Model Type: mixtral
Model Files (10 shards): 4.9 GB (1-of-10), 5.0 GB (2-of-10), 5.0 GB (3-of-10), 4.9 GB (4-of-10), 5.0 GB (5-of-10), 5.0 GB (6-of-10), 5.0 GB (7-of-10), 5.0 GB (8-of-10), 5.0 GB (9-of-10), 3.7 GB (10-of-10)
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: [PAD]
Vocabulary Size: 32001
LoRA Model: Yes
Initializer Range: 0.02
Torch Data Type: bfloat16
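
Given these internals, the checkpoint loads like any other Mixtral-architecture model in transformers. Below is a minimal sketch, assuming transformers >= 4.36.2 and accelerate are installed and that roughly 48.5 GB of GPU memory is available as listed above; the prompt text is purely illustrative.

```python
# Minimal loading sketch for Aratako/Beyonder-4x7B-v3-random-lora.
# Assumes transformers >= 4.36.2 and accelerate are installed, with
# ~48.5 GB of VRAM available (per the "Required VRAM" row above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aratako/Beyonder-4x7B-v3-random-lora"

# LlamaTokenizer with a 32001-token vocabulary ([PAD] added), per the table.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# MixtralForCausalLM weights in bfloat16; device_map="auto" lets accelerate
# shard the ten checkpoint files across whatever devices are available.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 32768-token context length is read from the model config automatically; no extra setup is needed, though memory use grows with sequence length.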


Original data from HuggingFace, OpenCompass and various public git repos.