LLM Name | MistralLite 7B GGUF |
Repository 🤗 | https://huggingface.co/second-state/MistralLite-7B-GGUF |
Model Name | MistralLite 7B |
Model Creator | Amazon Web Services |
Base Model(s) | |
Model Size | 7b |
Required VRAM | 2.7 GB |
Updated | 2025-02-05 |
Maintainer | second-state |
Model Type | mistral |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | gguf, q2, q4_k, q5_k |
Model Architecture | MistralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.34.0 |
Vocabulary Size | 32003 |
Torch Data Type | bfloat16 |
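Since the repository ships GGUF quantizations (q2, q4_k, q5_k), the model can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the local filename and the prompt template are assumptions and should be checked against the actual files and model card in the repository.

```python
# Minimal sketch: load a locally downloaded MistralLite 7B GGUF file with llama-cpp-python.
# The filename is an assumption -- point it at whichever quantized .gguf file
# (q2 / q4_k / q5_k) you actually download from second-state/MistralLite-7B-GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistrallite-7b.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=32768,  # matches the model's 32768-token context length
)

# MistralLite wraps requests in <|prompter|> ... </s><|assistant|> tags per the
# upstream model card; verify the template against the repository before use.
prompt = "<|prompter|>Summarize the benefits of a 32K context window.</s><|assistant|>"

result = llm(prompt, max_tokens=256, temperature=0.7)
print(result["choices"][0]["text"])
```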
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
MegaBeam Mistral 7B 300K Gguf | 282K / 5 GB | 7 | 3 |
Moxin Llm 7B | 32K / 0.9 GB | 1617 | 12 |
Boptruth Agatha 7B | 32K / 14.4 GB | 137 | 0 |
Mistral 7B Instruct V0.2.gguf | 32K / 14.5 GB | 3 | 3 |
Mistral 7B Instruct V0.2 GGUF | 32K / 4.4 GB | 12 | 2 |
Moxin Chat 7B | 32K / 0.9 GB | 37 | 29 |
Mahou 1.2a Mistral 7B | 32K / 14.4 GB | 33 | 6 |
...andle Mistral 7B Instruct V0.2 | 32K / 14.4 GB | 14 | 0 |
BioMistralMerged | 32K / 14.4 GB | 4185 | 0 |
Tamil Mistral 7B Instruct V0.1 | 32K / 14.8 GB | 246 | 14 |