LLM Name | DMoE 8B Finetune All V6 Epoch2 V0.1 |
---|---|
Repository 🤗 | https://huggingface.co/geniacllm/dMoE_8B_finetune_all_v6_epoch2_v0.1 |
Model Size | 8B |
Required VRAM | 18 GB |
Updated | 2025-02-05 |
Maintainer | geniacllm |
Model Type | mixtral |
Model Files | |
Supported Languages | ja, en |
Model Architecture | MixtralForCausalLM |
License | apache-2.0 |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.40.1 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 56320 |
Torch Data Type | bfloat16 |
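The card above contains everything needed to load the model with Hugging Face Transformers. Below is a minimal loading sketch, not an official recipe from the maintainer: it assumes the listed repository ID, `transformers` 4.40.1 or newer, and a GPU with roughly 18 GB of free VRAM; the Japanese example prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "geniacllm/dMoE_8B_finetune_all_v6_epoch2_v0.1"  # repository from the card

# The card lists LlamaTokenizer; AutoTokenizer resolves it from the repo config.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# MixtralForCausalLM architecture, loaded in the card's native bfloat16.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "日本語で自己紹介してください。"  # "Please introduce yourself in Japanese." (card lists ja, en)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt + generated tokens within the 2048-token context limit.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```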
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Lamma3merge3 15B MoE | 8K / 27.5 GB | 11 | 1 |
Lamma3merge2 15B MoE | 8K / 27.5 GB | 10 | 0 |
Llama3merge7 15B MoE | 8K / 27.5 GB | 7 | 0 |
Mergkit 1 | 8K / 22.6 GB | 5 | 0 |
Llama 3 8B Shisa 2x8B | 8K / 7.4 GB | 8 | 2 |
Llama3merge8 15B MoE | 8K / 27.5 GB | 5 | 0 |
Llama3merge6 15B MoE | 8K / 27.5 GB | 5 | 0 |
...oE 8B Pretrain 0520 Iter134999 | 2K / 18 GB | 15 | 0 |
...Storm V1.15 4x8B B 8 0bpw EXL2 | 8K / 25.2 GB | 5 | 0 |
... SnowStorm 4x8B 6.5bpw H8 EXL2 | 8K / 21 GB | 4 | 2 |