Mixsmol 4x400M V0.1 Epoch3 by vilm


Tags: Autotrain compatible, Endpoints compatible, Mixtral, MoE, Region: US, Safetensors

Mixsmol 4x400M V0.1 Epoch3 Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Mixsmol 4x400M V0.1 Epoch3 (vilm/Mixsmol-4x400M-v0.1-epoch3)

Mixsmol 4x400M V0.1 Epoch3 Parameters and Internals

Model Type: multimodal, crosslingual
Additional Notes: This is an experimental model run focusing on data mixing.
Training Details
Data Sources: Synthetic Textbooks, RefinedWeb, RedPajama-v2, MathPile, ThePile, GoodWiki, The Stack Smol XL, The Vault (train_small split), Instruction Pretraining
Data Volume: 50B tokens
Methodology: An experimental data-mixing run targeting reasoning capabilities (via synthetic textbook data) and crosslingual understanding (via machine-translation and multilingual pretraining tasks)
Release Notes
Version: Epoch 3
Notes: This version was trained on 50B tokens to test reasoning capabilities and crosslingual understanding. Future runs will use more data and compute to maximize capabilities.
LLM Name: Mixsmol 4x400M V0.1 Epoch3
Repository: https://huggingface.co/vilm/Mixsmol-4x400M-v0.1-epoch3
Model Size: 1.8B
Required VRAM: 3.5 GB
Updated: 2025-02-05
Maintainer: vilm
Model Type: mixtral
Model Files: 3.5 GB / 0.0 GB
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.37.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 38365
Torch Data Type: bfloat16

Best Alternatives to Mixsmol 4x400M V0.1 Epoch3

Best Alternatives                Context / RAM    Downloads    Likes
Mixsmol 4x400M V0.1 Epoch1       4K / 3.5 GB      149          12
Mixsmol 4x400M V0.1 Epoch2       4K / 3.5 GB      159          5
Mixtral 4x400M                   4K / 7.1 GB      57           2
Tiny Llama 1.8B                  2K / 3.7 GB      0            0
Note: a green score (e.g. "73.2") means that the model is better than vilm/Mixsmol-4x400M-v0.1-epoch3.

Rank the Mixsmol 4x400M V0.1 Epoch3 Capabilities

Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227