Dart V2 MoE Sft by p1atdev

Tags: AutoTrain compatible · Base model (finetune): p1atdev/dart-v2-moe-base · Danbooru · Dataset: isek-ai/danbooru-tags-2024 · Mixtral · MoE · Optimum · Region: US · Safetensors · SFT · TRL
Model Card on HF 🤗: https://huggingface.co/p1atdev/dart-v2-moe-sft

Dart V2 MoE Sft Parameters and Internals

Model Type
Causal language model

Training Details
Data Sources: isek-ai/danbooru-tags-2024
Data Volume: ~7M Danbooru tag records, covering posts from 2005 through 2024-03-31 (see the dataset-loading sketch below)
Hardware Used: 8x RTX A6000
Model Architecture: Mixtral
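
The listing tags this model with TRL and SFT, and the training data is public on the Hub. A minimal sketch for inspecting that dataset with the Hugging Face datasets library, assuming its default config exposes a "train" split (the listing names neither configs nor splits):

    # Hedged sketch: peek at the SFT training data.
    # Assumption: the default config of isek-ai/danbooru-tags-2024 has a
    # "train" split; adjust config/split names if the dataset defines others.
    from datasets import load_dataset

    ds = load_dataset("isek-ai/danbooru-tags-2024", split="train")
    print(ds)     # row count and column names
    print(ds[0])  # one example row of Danbooru tag metadata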
LLM Name: Dart V2 MoE Sft
Repository 🤗: https://huggingface.co/p1atdev/dart-v2-moe-sft
Base Model(s): Dart V2 MoE Base (p1atdev/dart-v2-moe-base)
Model Size: 165.7M parameters
Required VRAM: 0.3 GB
Updated: 2025-02-22
Maintainer: p1atdev
Model Type: mixtral
Model Files: 0.3 GB
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 1024
Model Max Length: 1024
Transformers Version: 4.38.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|pad|>
Vocabulary Size: 30649
Torch Data Type: bfloat16
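
The 0.3 GB VRAM figure is consistent with the parameter count: 165.7M parameters at bfloat16 (2 bytes each) is roughly 0.33 GB before activations. Below is a minimal load-and-generate sketch with transformers (the listing pins version 4.38.2); the prompt string and sampling settings are illustrative assumptions, not a documented prompt format:

    # Minimal sketch: load the 165.7M-parameter MixtralForCausalLM checkpoint
    # in bfloat16 and continue a tag sequence. Prompt and decoding parameters
    # are assumptions for illustration, not the model's documented usage.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "p1atdev/dart-v2-moe-sft"
    tokenizer = AutoTokenizer.from_pretrained(repo)  # PreTrainedTokenizerFast
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

    inputs = tokenizer("1girl, solo", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=128,  # stays well within the 1024-token context
            do_sample=True,
            top_p=0.9,
        )
    print(tokenizer.decode(out[0], skip_special_tokens=True))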

Best Alternatives to Dart V2 MoE Sft

Best Alternatives     Context / RAM    Downloads    Likes
Dart V2 MoE Base      1K / 0.3 GB      160          1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227