Jetmoe 8B Sft by jetmoe


Tags: arXiv:2404.07413, alignment-handbook, autotrain-compatible, conversational, endpoints-compatible, generated-from-trainer, jetmoe, region:us, safetensors, sharded, tensorflow. Base model (finetune): jetmoe/jetmoe-8b. Datasets: HuggingFaceH4/airoboros-3.2, HuggingFaceH4/capybara, HuggingFaceH4/Code-Feedback, HuggingFaceH4/orca-math-word-problems-200k, HuggingFaceH4/SystemChat, HuggingFaceH4/ultrachat_200k.
Model Card on HF 🤗: https://huggingface.co/jetmoe/jetmoe-8b-sft

Jetmoe 8B Sft Benchmarks

Jetmoe 8B Sft (jetmoe/jetmoe-8b-sft)
Benchmark scores are reported on the source page as percentages relative to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Jetmoe 8B Sft Parameters and Internals

Model Type:
Causal Language Model

Training Details:
Data Sources: HuggingFaceH4/ultrachat_200k, HuggingFaceH4/airoboros-3.2, HuggingFaceH4/Code-Feedback, HuggingFaceH4/orca-math-word-problems-200k, HuggingFaceH4/SystemChat, HuggingFaceH4/capybara
Data Volume: 1.25 trillion tokens
Methodology: Two-phase training method from MiniCPM. Phase 1 uses a constant learning rate with linear warmup and trains on 1 trillion tokens from large-scale open-source pretraining datasets, including RefinedWeb, the Pile, and GitHub data. Phase 2 uses exponential learning-rate decay and trains on 250 billion tokens drawn from the phase 1 datasets plus additional high-quality open-source datasets (a schematic of this schedule is sketched below).
Hardware Used: 96×H100 GPU cluster

Model Architecture:
24 blocks, each containing two mixture-of-experts layers: a Mixture of Attention heads (MoA) layer and a Mixture of MLP Experts (MoE) layer. Each MoA and MoE layer has 8 experts, with 2 activated per input token, for 8 billion total parameters and about 2.2 billion active during inference (see the routing sketch below).
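To make the two-phase schedule concrete, here is a minimal sketch in Python. The warmup length, peak and final learning rates, and phase split are illustrative placeholders, not values reported for JetMoE; only the shape (linear warmup, constant phase 1, exponential decay in phase 2) follows the methodology above.

```python
def two_phase_lr(step, total_steps, warmup_steps=2000,
                 peak_lr=5e-4, final_lr=5e-5, phase1_fraction=0.8):
    """Illustrative two-phase schedule: linear warmup to a constant
    learning rate (phase 1), then exponential decay toward final_lr
    (phase 2). All numeric defaults are placeholders; phase1_fraction=0.8
    mirrors the ~1T / 1.25T token split described above."""
    phase1_end = int(total_steps * phase1_fraction)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)   # linear warmup
    if step < phase1_end:
        return peak_lr                                 # constant phase 1
    # phase 2: exponential decay from peak_lr to final_lr
    progress = (step - phase1_end) / max(1, total_steps - phase1_end)
    return peak_lr * (final_lr / peak_lr) ** progress
```

Wrapping this function in torch.optim.lr_scheduler.LambdaLR (after dividing by peak_lr) is one way to apply the same shape in a training loop.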
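The architecture note above routes each token to 2 of the 8 experts in every MoA and MoE layer. The following PyTorch sketch illustrates that top-2 routing for the MLP side only; it is not the actual JetMoEForCausalLM implementation, and the hidden sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoEMLP(nn.Module):
    """Illustrative sparse MLP layer: 8 experts, 2 activated per token.
    Dimensions are placeholders, not JetMoE's actual sizes."""
    def __init__(self, d_model=2048, d_ff=5632, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (num_tokens, d_model)
        logits = self.router(x)                  # (num_tokens, n_experts)
        weights, expert_idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the chosen 2
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e  # tokens whose slot routes to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

Selecting only 2 of 8 experts in both the attention (MoA) and MLP (MoE) layers is what keeps roughly 2.2B of the 8B parameters active per token.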
LLM Name: Jetmoe 8B Sft
Repository 🤗: https://huggingface.co/jetmoe/jetmoe-8b-sft
Base Model(s): Jetmoe 8B (jetmoe/jetmoe-8b)
Model Size: 8b
Required VRAM: 17 GB
Updated: 2025-02-22
Maintainer: jetmoe
Model Type: jetmoe
Model Files: 4.9 GB (1 of 4), 4.9 GB (2 of 4), 4.9 GB (3 of 4), 2.3 GB (4 of 4)
Model Architecture: JetMoEForCausalLM
License: apache-2.0
Model Max Length: 4096
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Activation Function: silu
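Given the JetMoEForCausalLM architecture, LlamaTokenizer, 4096-token context, and roughly 17 GB of sharded safetensors listed above, loading the model with Hugging Face transformers might look like the sketch below. Note the assumptions: trust_remote_code=True is only needed on transformers versions without built-in JetMoE support, device_map="auto" requires the accelerate package, and the chat-template call assumes the SFT repo ships a chat template (alignment-handbook models typically do).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jetmoe/jetmoe-8b-sft"

# LlamaTokenizer-based tokenizer: 32000-token vocabulary, 4096 max length, </s> padding
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ~17 GB of sharded safetensors; bfloat16 keeps the footprint close to that figure
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # needs accelerate; places/offloads shards automatically
    trust_remote_code=True,     # only required if your transformers lacks JetMoE support
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```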

Best Alternatives to Jetmoe 8B Sft

Best Alternatives    Context / RAM    Downloads / Likes
Jetmoe 8B            0K / 17 GB       3227245
Jetmoe 8B Chat       0K / 17 GB       9628

Rank the Jetmoe 8B Sft Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227