| LLM Name | Dart V2 Base |
|---|---|
| Repository 🤗 | https://huggingface.co/p1atdev/dart-v2-base |
| Model Size | 100M |
| Required VRAM | 0.2 GB |
| Updated | 2025-02-22 |
| Maintainer | p1atdev |
| Model Type | mistral |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 1024 |
| Model Max Length | 1024 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | `<\|pad\|>` |
| Vocabulary Size | 30649 |
| Torch Data Type | bfloat16 |
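The specs above map directly onto a standard `transformers` loading call. Below is a minimal sketch, assuming the stock auto classes work for this checkpoint; the prompt string is a placeholder, and the input format the model actually expects should be taken from the model card.

```python
# Minimal loading sketch based on the table above: auto classes,
# bfloat16 weights, 1024-token max length, <|pad|> padding token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "p1atdev/dart-v2-base"

tokenizer = AutoTokenizer.from_pretrained(repo)  # PreTrainedTokenizerFast
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# Placeholder prompt; see the model card for the expected input format.
inputs = tokenizer("example prompt", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,                   # stay well under the 1024-token limit
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,  # <|pad|> per the table above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```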
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Endurance 100B V1 | 128K / 200.1 GB | 13 | 13 |
| Endurance 100B V1.1 | 128K / 200.2 GB | 27 | 3 |
| Lazarus 2407 100B | 128K / 200.2 GB | 8 | 8 |
| Mistral 100M Textbooks | 32K / 0.5 GB | 1981 | 3 |
| Dart V2 Sft | 1K / 0.2 GB | 961 | 2 |