Property | Value
---|---
LLM Name | Llama 3 Aplite Instruct 4x8B MoE
Repository | Open on 🤗 Hugging Face
Base Model(s) | 
Model Size | 24.9B
Required VRAM | 50 GB
Updated | 2024-07-27
Maintainer | raincandy-u
Model Type | mixtral
Instruction-Based | Yes
Model Files | 
Supported Languages | en
Model Architecture | MixtralForCausalLM
License | other
Context Length | 8192
Model Max Length | 8192
Transformers Version | 4.39.3
Tokenizer Class | PreTrainedTokenizerFast
Padding Token | <\|begin_of_text\|>
Vocabulary Size | 128256
Torch Data Type | float16
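
The fields above map directly onto the standard `transformers` loading API. The snippet below is a minimal sketch, not an official usage example: the repository id is inferred from the Maintainer and LLM Name fields and may not match the exact path on 🤗, and it assumes roughly 50 GB of free GPU memory for the float16 weights.

```python
# Minimal loading sketch based on the fields above (MixtralForCausalLM,
# float16 weights, 8192-token context). The repo id is an assumption
# inferred from the Maintainer and LLM Name fields.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "raincandy-u/Llama-3-Aplite-Instruct-4x8B-MoE"  # assumed path

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # PreTrainedTokenizerFast
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type field
    device_map="auto",          # spreads the ~50 GB of weights across GPUs
)

prompt = "Briefly explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the Model Architecture field is MixtralForCausalLM, `AutoModelForCausalLM` resolves to the Mixtral implementation that ships with transformers 4.39.3, so no custom model code is needed.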
Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
---|---|---|---|---
Llama 3 4x8B | 0.2 | 8K / 49.9 GB | 235 | 0
Llama 3 8B Instruct MoE 2 | 0.2 | 8K / 50 GB | 6 | 0
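
The memory figures in both tables are consistent with storing plain float16 weights: two bytes per parameter, so a 24.9B-parameter model needs about 49.8 GB before activations and KV cache. A quick sanity check of that arithmetic:

```python
# Back-of-the-envelope weight memory for the Model Size field above.
params = 24.9e9      # 24.9B parameters
bytes_per_param = 2  # float16 stores each parameter in 2 bytes
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # -> 49.8 GB, close to the listed 50 GB
```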