Inixion 2x8B V2 by Alsebay


Tags: Autotrain compatible, Conversational, Endpoints compatible, Instruct, Mixtral, MoE, Region: US, Safetensors, Sharded, Tensorflow
Model Card on HF 🤗: https://huggingface.co/Alsebay/Inixion-2x8B-v2

Inixion 2x8B V2 Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Inixion 2x8B V2 Parameters and Internals

Model Type:
general task, RP

Use Cases:
Primary Use Cases: RP, general tasks
Limitations: Built on an 8B base model, so it lacks some knowledge and can occasionally produce unexpected errors. It is also fairly large, needing at least 8 GB of VRAM (12 GB is more comfortable) for the GGUF version; a minimal GGUF inference sketch appears after this list.

Additional Notes:
RoPE scaling can stretch the context length to 16k, but it may be unstable. The model is not especially violent or toxic.

Training Details:
Data Sources: MaziyarPanahi/Llama-3-8B-Instruct-v0.9, NeverSleep/Llama-3-Lumimaid-8B-v0.1, Sao10K/L3-8B-Stheno-v3.2
Methodology: MoE merge (a hypothetical mergekit recipe is sketched after this list)
Context Length: 16000
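
This page does not reproduce the merge config. MoE merges of this kind are commonly assembled with mergekit's mergekit-moe tool, which reads a small YAML config naming a base model, the expert models, and the prompts that seed the router. The sketch below is a hypothetical reconstruction from the data sources listed above; the base/expert role assignments, gate mode, and routing prompts are assumptions, not Alsebay's actual settings.

# Hypothetical mergekit-moe recipe (Python wrapper around a YAML config).
# Which source model served as the base, and which prompts seeded the router,
# are guesses for illustration only.
import subprocess

MOE_CONFIG = """\
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9   # assumed base model
gate_mode: hidden      # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: Sao10K/L3-8B-Stheno-v3.2
    positive_prompts:
      - "You are a helpful assistant for general tasks."
  - source_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
    positive_prompts:
      - "You are an immersive roleplay partner."
"""

with open("moe_config.yml", "w") as f:
    f.write(MOE_CONFIG)

# mergekit-moe writes the assembled 2x8B MoE checkpoint to the output directory.
subprocess.run(["mergekit-moe", "moe_config.yml", "./my-2x8b-moe"], check=True)

With two 8B experts sharing the attention weights, a merge like this lands at roughly the 13.7b parameters and about 27 GB of bfloat16 weights reported in the details below.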
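
For the VRAM figures in the limitations above, a quantized GGUF conversion is the usual way to fit the model into 8-12 GB. A minimal inference sketch with llama-cpp-python follows; the GGUF filename and quantization level are hypothetical, so substitute whichever conversion you actually downloaded.

# Minimal GGUF inference sketch using llama-cpp-python.
# The filename and quantization level below are hypothetical examples.
from llama_cpp import Llama

llm = Llama(
    model_path="Inixion-2x8B-v2.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=8192,        # native context; the RoPE-scaled 16k mode is reported unstable
    n_gpu_layers=-1,   # offload every layer; budget roughly 8-12 GB of VRAM
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])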

LLM Name: Inixion 2x8B V2
Repository 🤗: https://huggingface.co/Alsebay/Inixion-2x8B-v2
Model Size: 13.7b
Required VRAM: 27.4 GB
Updated: 2024-12-03
Maintainer: Alsebay
Model Type: mixtral
Instruction-Based: Yes
Model Files: 4.0 GB (1-of-7), 4.0 GB (2-of-7), 4.0 GB (3-of-7), 4.0 GB (4-of-7), 4.0 GB (5-of-7), 4.0 GB (6-of-7), 3.4 GB (7-of-7)
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
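
The details above (MixtralForCausalLM architecture, bfloat16 weights, transformers 4.41.2) map onto a standard transformers loading path. A minimal sketch, assuming the repository's tokenizer ships a Llama-3-style chat template and that enough GPU memory is available for roughly 27 GB of bfloat16 weights:

# Minimal loading sketch with Hugging Face transformers (>= 4.41.2 per the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/Inixion-2x8B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # spread the ~27 GB of weights across available devices
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

For tighter memory budgets, 4-bit loading via bitsandbytes or the GGUF route sketched earlier are the usual fallbacks.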

Best Alternatives to Inixion 2x8B V2

Best Alternatives                 | Context / RAM  | Downloads / Likes
L3.1 Celestial Stone 2x8B         | 128K / 27.3 GB | 3123
...ma 3 2x8B Instruct MoE 64K Ctx | 64K / 27.3 GB  | 124
Defne Llama3 2x8B                 | 8K / 27.4 GB   | 93335
MoE Llama3 8bx2 Rag               | 8K / 27.3 GB   | 50
Inixion 2x8B                      | 8K / 27.5 GB   | 91
Llama 3 Chatty 2x8B               | 8K / 27.3 GB   | 711
FinalFintetuning XVIII 2x8B       | 8K / 27.5 GB   | 83
Llama 3 Teal Instruct 2x8B MoE    | 8K / 27.3 GB   | 71
...lama3 2x8b MoE 41K Experiment1 | 8K / 27.3 GB   | 92
Llama 3 8Bx2 MoE DPO              | 8K / 27.4 GB   | 71

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227