Llama3 4x8b PythonT2 Step Final by kalo-team


Tags: Arxiv:2303.01610, Autotrain compatible, Code, Conversational, En, Endpoints compatible, Mixtral, MoE, Region: US, Safetensors, Sharded, Tensorflow

Llama3 4x8b PythonT2 Step Final Benchmarks

Benchmark scores are reported as percentages relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Llama3 4x8b PythonT2 Step Final (kalo-team/llama3-4x8b-pythonT2_step_final)

Llama3 4x8b PythonT2 Step Final Parameters and Internals

Model Type: distilled, sparse MoE
Use Cases:
  Areas: research, experimentation
  Limitations: the limited training data can lead to catastrophic forgetting
Additional Notes: Initial evaluation shows mild catastrophic forgetting due to the limited training data.
Supported Languages: en (high proficiency)
Training Details:
  Data Sources: Python instruct data
  Data Volume: ~2.5 million tokens
  Methodology: KL-divergence distillation of a Mixtral-style sparse MoE (see the distillation loss sketch below)
  Context Length: 8000
  Model Architecture: sparse MoE whose experts are duplicated MLP layers from Llama3 8B (see the construction sketch below)
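
Based on the stated architecture (a Mixtral-style sparse MoE whose experts are copies of the Llama3 8B MLP), one plausible way to assemble such a checkpoint with transformers is sketched below. This is an assumption about the construction, not kalo-team's published script: the donor repo name, the top-2 routing choice, the disabled sliding window, and the freshly initialized router are all illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, MixtralConfig, MixtralForCausalLM

# Hypothetical donor checkpoint; the card only says "Llama3 8b".
BASE = "meta-llama/Meta-Llama-3-8B-Instruct"

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
cfg = base.config

# Mixtral config mirroring the Llama3 8B geometry, with 4 local experts ("4x8b").
moe_cfg = MixtralConfig(
    vocab_size=cfg.vocab_size,                      # 128256
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rms_norm_eps=cfg.rms_norm_eps,
    rope_theta=cfg.rope_theta,
    sliding_window=None,        # keep full attention, as in Llama3
    num_local_experts=4,        # four duplicated experts
    num_experts_per_tok=2,      # assumed Mixtral-style top-2 routing
)
# Note: the ~25B-parameter student is first materialized in fp32, which needs
# on the order of 100 GB of host RAM before the bf16 cast.
moe = MixtralForCausalLM(moe_cfg).to(torch.bfloat16)

# Shared (non-MoE) weights are copied over one-to-one.
moe.model.embed_tokens.load_state_dict(base.model.embed_tokens.state_dict())
moe.model.norm.load_state_dict(base.model.norm.state_dict())
moe.lm_head.load_state_dict(base.lm_head.state_dict())

for src, dst in zip(base.model.layers, moe.model.layers):
    dst.self_attn.load_state_dict(src.self_attn.state_dict())
    dst.input_layernorm.load_state_dict(src.input_layernorm.state_dict())
    dst.post_attention_layernorm.load_state_dict(src.post_attention_layernorm.state_dict())
    # Duplicate the dense Llama MLP into every expert (Mixtral naming: w1/w3/w2).
    for expert in dst.block_sparse_moe.experts:
        expert.w1.load_state_dict(src.mlp.gate_proj.state_dict())
        expert.w3.load_state_dict(src.mlp.up_proj.state_dict())
        expert.w2.load_state_dict(src.mlp.down_proj.state_dict())
    # The router (dst.block_sparse_moe.gate) keeps its fresh random init and is
    # learned during the subsequent distillation step.

moe.save_pretrained("llama3-4x8b-init")
```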
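The listed methodology, KL-divergence distillation, trains the MoE student to match the teacher's next-token distribution. A minimal sketch of such a loss in PyTorch follows; the temperature and reduction choices are illustrative, and the actual training recipe (teacher checkpoint, temperature, auxiliary router losses) is not documented on this card.

```python
import torch
import torch.nn.functional as F

def kl_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL(teacher || student) over the vocabulary, averaged over all tokens.

    Both inputs are [batch, seq_len, vocab_size] logits; the teacher's logits
    are computed under torch.no_grad() so only the student receives gradients.
    """
    t = temperature
    vocab = student_logits.size(-1)
    student_logp = F.log_softmax(student_logits / t, dim=-1).reshape(-1, vocab)
    teacher_logp = F.log_softmax(teacher_logits / t, dim=-1).reshape(-1, vocab)
    # F.kl_div takes log-probs as input; log_target=True marks the target as log-probs.
    return F.kl_div(
        student_logp, teacher_logp, log_target=True, reduction="batchmean"
    ) * (t * t)
```

In practice this loss would be combined with whatever auxiliary load-balancing term the Mixtral architecture uses so that the freshly initialized routers do not collapse onto a single expert.
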
LLM Name: Llama3 4x8b PythonT2 Step Final
Repository: https://huggingface.co/kalo-team/llama3-4x8b-pythonT2_step_final
Model Size: 24.9B
Required VRAM: 50.1 GB
Updated: 2025-02-22
Maintainer: kalo-team
Model Type: mixtral
Model Files: 4.9 GB (1-of-11), 5.0 GB (2-of-11), 4.9 GB (3-of-11), 5.0 GB (4-of-11), 5.0 GB (5-of-11), 4.9 GB (6-of-11), 5.0 GB (7-of-11), 5.0 GB (8-of-11), 4.9 GB (9-of-11), 4.4 GB (10-of-11), 1.1 GB (11-of-11)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
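
Given the metadata above (MixtralForCausalLM architecture, bfloat16 shards totalling roughly 50 GB, 8192-token context), a minimal loading and generation sketch with transformers could look like the following. The prompt and generation settings are illustrative only; the card does not specify a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kalo-team/llama3-4x8b-pythonT2_step_final"

tokenizer = AutoTokenizer.from_pretrained(repo)   # PreTrainedTokenizerFast
model = AutoModelForCausalLM.from_pretrained(     # resolves to MixtralForCausalLM
    repo,
    torch_dtype=torch.bfloat16,                   # matches the bf16 shards
    device_map="auto",                            # ~50 GB spread across available devices
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```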

Best Alternatives to Llama3 4x8b PythonT2 Step Final

Best Alternatives | Context / RAM | Downloads / Likes
L3.1 MoE 4x8B V0.1 | 128K / 50.1 GB | 423
L3.1 ClaudeMaid 4x8B | 128K / 50.1 GB | 457
L3.1 MoE 4x8B V0.2 | 128K / 50.1 GB | 172
Llama Salad 4x8B V3 | 8K / 50.1 GB | 45
...oE 4x8B Dark Planet Rising 25B | 8K / 50.1 GB | 220
...x8B Dark Planet Rebel FURY 25B | 8K / 50.1 GB | 190
L3 MoE 4X8B Grand Horror 25B | 8K / 50.1 GB | 160
OpenCrystal V4 L3 4x8B | 8K / 50 GB | 132
L3 SnowStorm V1.15 4x8B B | 8K / 49.9 GB | 5811
L3 SnowStorm V1.15 4x8B A | 8K / 49.9 GB | 541
Note: a green score (e.g. "73.2") means the model is better than kalo-team/llama3-4x8b-pythonT2_step_final.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227