| Field | Value |
|---|---|
| LLM Name | Fireball Mistral Nemo Base 2407 Sft V2 F16 Gguf |
| Repository 🤗 | https://huggingface.co/EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v2-f16-gguf |
| Base Model(s) | |
| Model Size | 12.2B |
| Required VRAM | 24.5 GB |
| Updated | 2024-09-18 |
| Maintainer | EpistemeAI |
| Model Type | mistral |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | fp16\|gguf |
| Model Architecture | AutoModel |
| License | apache-2.0 |
| Model Max Length | 1,024,000 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | `<pad>` |
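
The listed VRAM requirement is consistent with fp16 storage, which uses 2 bytes per parameter. A quick back-of-the-envelope check for the model weights alone (figures taken from the card above; KV cache and runtime overhead are not counted and would push actual usage higher):

```python
# Rough weight-only VRAM estimate for an fp16 checkpoint.
params = 12.2e9        # 12.2B parameters, from the card
bytes_per_param = 2    # fp16 = 16 bits = 2 bytes
vram_gb = params * bytes_per_param / 1e9
print(f"{vram_gb:.1f} GB")  # ≈ 24.4 GB, in line with the 24.5 GB listed
```

The same arithmetic explains why lower-bit GGUF quantizations (e.g. 8-bit or 4-bit) cut the memory footprint roughly in half or quarter relative to this fp16 file.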