LLM Explorer: A Curated Large Language Model Directory and Analytics

Mistral Ft Optimized 1218 Mistral 7B Instruct V0.1 GGUF by MaziyarPanahi

Looking for open-source LLMs or SLMs? The directory lists 18,732 models in total.

Tags: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 7b, 8-bit, autotrain compatible, base model:maziyarpanahi/mistr..., en, endpoints compatible, gguf, instruct, license:apache-2.0, license:cc-by-nc-4.0, merge, mistral, mistralai/mistral-7b-instruct-..., openpipe/mistral-ft-optimized-..., quantized, region:us, safetensors

Rank the Mistral Ft Optimized 1218 Mistral 7B Instruct V0.1 GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Mistral Ft Optimized 1218 Mistral 7B Instruct V0.1 GGUF (MaziyarPanahi/mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1-GGUF)

Best Alternatives to Mistral Ft Optimized 1218 Mistral 7B Instruct V0.1 GGUF

Best Alternatives | HF Rank | Context/RAM | Downloads / Likes
CodeNinja 1.0 OpenChat 7B GGUF | 59.1 | 0K / 3.1 GB | 531
... 2.6 Mistral 7B DPO Laser GGUF | 59 | 0K / N/A | 2171
... 2.6 Mistral 7B DPO Laser GGUF | 59 | 0K / 3.1 GB | 8729
...olphin 2.6 Mistral 7B DPO GGUF | 58.9 | 0K / 3.1 GB | 1819
Mistral 7B Instruct V0.2 GGUF | 57.6 | 0K / 2.7 GB | 2301
Mistral 7B Instruct V0.2 GGUF | 57.6 | 0K / 3.1 GB | 2611186
Mistral 7B Instruct V0.2 GGUF | 57.6 | 0K / 4.4 GB | 90
Dolphin 2.6 Mistral 7B GGUF | 56.9 | 0K / N/A | 2060
Dolphin 2.6 Mistral 7B GGUF | 56.9 | 0K / 3.1 GB | 8351
Openinstruct Mistral 7B GGUF | 55.8 | 0K / 3.1 GB | 34
Note: a green score (e.g. "73.2") indicates that the model performs better than MaziyarPanahi/mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1-GGUF.
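Download and like counts change over time; a quick way to check the current figures for any of these repositories is the Hugging Face Hub API. A minimal sketch, assuming the huggingface_hub Python package is installed: the subject model's repo ID is taken from this page, while the alternatives' full IDs are truncated above and would need to be looked up on the Hub.

from huggingface_hub import HfApi

api = HfApi()
repo_id = "MaziyarPanahi/mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1-GGUF"

# model_info returns repository metadata, including current download and like counts.
info = api.model_info(repo_id)
print(f"{repo_id}: downloads={info.downloads}, likes={info.likes}")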

Mistral Ft Optimized 1218 Mistral 7B Instruct V0.1 GGUF Parameters and Internals

LLM Name: Mistral Ft Optimized 1218 Mistral 7B Instruct V0.1 GGUF
Repository: Open on 🤗 Hugging Face
Model Name: mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1-GGUF
Model Creator: MaziyarPanahi
Base Model(s): MaziyarPanahi/mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1
Model Size: 7b
Required VRAM: 2.7 GB
Updated: 2024-02-21
Maintainer: MaziyarPanahi
Model Type: mistral
Instruction-Based: Yes
Model Files: 2.7 GB, 3.8 GB, 3.5 GB, 3.2 GB, 4.4 GB, 4.1 GB, 5.1 GB, 5.0 GB, 5.9 GB, 7.7 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: apache-2.0
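Since the repository ships GGUF quantized files, it can be run locally with a llama.cpp-based runtime. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the repo ID comes from this page, but the exact GGUF filename is an assumption (only file sizes are listed above), so check the repository's file list before downloading.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "MaziyarPanahi/mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1-GGUF"
# Hypothetical filename: the 2.7 GB file is presumably the 2-bit quant;
# verify the real name in the repository's file listing.
filename = "mistral-ft-optimized-1218-Mistral-7B-Instruct-v0.1.Q2_K.gguf"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)
llm = Llama(model_path=model_path, n_ctx=4096)

# The base models are instruction-tuned, so a Mistral-style [INST] prompt is assumed.
output = llm("[INST] Explain what a GGUF file is in one sentence. [/INST]", max_tokens=96)
print(output["choices"][0]["text"])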
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003