| LLM Name | MISTRAL 1.58 BIT PRETRAIN V2 |
|---|---|
| Repository 🤗 | https://huggingface.co/liminerity/MISTRAL-1.58-BIT-PRETRAIN-v2 |
| Model Size | 1.1B |
| Required VRAM | 4.3 GB |
| Updated | 2025-02-22 |
| Maintainer | liminerity |
| Model Type | mistral |
| Model Files | |
| Model Architecture | MistralForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.41.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 32000 |
| Torch Data Type | float32 |
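A minimal loading sketch based on the specs above, assuming the standard Hugging Face `transformers` Auto classes; the repo id, dtype, and context length come straight from the table, and the prompt is a placeholder:

```python
# Minimal sketch: load MISTRAL-1.58-BIT-PRETRAIN-v2 with transformers
# (>= 4.41.2 per the table above) and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "liminerity/MISTRAL-1.58-BIT-PRETRAIN-v2"

# Tokenizer Class is LlamaTokenizer; AutoTokenizer resolves it from the repo config.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Torch Data Type is float32, matching the ~4.3 GB VRAM requirement listed above.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)

prompt = "The quick brown fox"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Context Length / Model Max Length is 2048 tokens, so keep
# prompt + generated tokens within that window.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As a sanity check on the table, 1.1B parameters at float32 (4 bytes each) is roughly 4.4 GB of weights, consistent with the listed 4.3 GB VRAM requirement.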
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Hare 1.1B Base 0.5v | 32K / 2.4 GB | 47 | 0 |
| TinyLlama Mistral | 32K / 4.4 GB | 11 | 2 |
| Mistral 1.1B Testing | 32K / 4.4 GB | 1945 | 1 |
| Hare 1.1B Base | 32K / 2.2 GB | 81 | 7 |
| Dipsmol | 32K / 2.2 GB | 61 | 0 |
| Hare 1.1B Chat | 32K / 2.2 GB | 37 | 0 |
| Hare 1.1B Tool | 32K / 2.2 GB | 10 | 1 |
| Stealth Rag V1.1 | 32K / 14.4 GB | 9 | 0 |
| Mallam 1.1B 4096 | 32K / 2.2 GB | 1512 | 5 |
| Stealth Rag V1 E1 | 32K / 14.4 GB | 12 | 1 |