LLM Name | Mistral Ft Optimized 1218 GPTQ |
Repository 🤗 | https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GPTQ |
Model Name | Mistral FT Optimized 1218 |
Model Creator | OpenPipe |
Base Model(s) | |
Model Size | 1.2b |
Required VRAM | 4.2 GB |
Updated | 2025-03-14 |
Maintainer | TheBloke |
Model Type | mistral |
Model Files | |
Supported Languages | en |
GPTQ Quantization | Yes |
Quantization Type | gptq |
Model Architecture | MistralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.35.2 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 32000 |
Torch Data Type | bfloat16 |
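The specifications above (GPTQ quantization, `MistralForCausalLM` architecture, `LlamaTokenizer`, 32768-token context) map onto the standard Transformers loading path for TheBloke's GPTQ repos. The sketch below is a minimal, hedged example assuming a recent `transformers` (the card lists 4.35.2) with `optimum` and `auto-gptq` installed; the prompt text and generation settings are illustrative, not from the model card.

```python
# Minimal sketch: load the GPTQ checkpoint via the Transformers GPTQ integration.
# Assumes `optimum` and `auto-gptq` are installed alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/mistral-ft-optimized-1218-GPTQ"  # repository from the table above

# Tokenizer class (LlamaTokenizer) is resolved from the repo's tokenizer_config.json.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Quantized weights fit in roughly 4.2 GB of VRAM per the table above;
# device_map="auto" places layers on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
)

# Illustrative prompt and generation settings (not taken from the model card).
prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```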
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
... Finetune 16bit Ver9 Main GPTQ | 32K / 4.2 GB | 12 | 0 |
Dictalm2.0 Instruct GPTQ | 32K / 4.2 GB | 93 | 0 |
Dictalm2.0 GPTQ | 32K / 4.2 GB | 54 | 0 |
Multi Verse Model GPTQ | 32K / 4.2 GB | 32 | 1 |
Turdus GPTQ | 32K / 4.2 GB | 76 | 5 |
Garrulus GPTQ | 32K / 4.2 GB | 22 | 3 |
HamSter 0.1 GPTQ | 32K / 4.2 GB | 27 | 2 |
Phoenix GPTQ | 32K / 4.2 GB | 22 | 1 |
Mistral Ft Optimized 1227 GPTQ | 32K / 4.2 GB | 35 | 2 |
Metis 0.5 GPTQ | 32K / 4.2 GB | 24 | 1 |