Model Q4 K M by sanjay782


Tags: Merged Model, 16bit, Base model: sanjay782/model mer..., En, Endpoints compatible, GGUF, License: apache-2.0, Llama, Q4, Quantized, Region: us, Unsloth

Rank the Model Q4 K M Capabilities


Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Model Q4 K M (sanjay782/model_q4_k_m)

Best Alternatives to Model Q4 K M

Best Alternatives                    HF Rank   Context/RAM    Downloads   Likes
Mixtral 8x7B V0.1 GGUF               68.4      0K / 15.6 GB   165394      16
Mixtral 8x7B V0.1 GGUF               68.4      0K / 17.3 GB   200         0
...ixtral 8x7B Instruct V0.1 GGUF    68.2      0K / 15.6 GB   165572      581
...ixtral 8x7B Instruct V0.1 GGUF    68.2      0K / 17.3 GB   706         2
Llama3 Kw Gen                        -         2K / 4.9 GB    44          1
...llama Ft En Es Rag Gguf Q4 K M    -         0K / 0 GB      476         0
All MiniLM L6 V2 Gguf                -         0K / 0 GB      282         0
Gemma2b Gguf                         -         0K / 0 GB      46          0
OWAI3 Test 5                         -         0K / 0 GB      30          0
Llama Gguf Finetuned                 -         0K / 0 GB      28          0
Note: a green score (e.g. "73.2") indicates that the listed model outperforms sanjay782/model_q4_k_m.

Model Q4 K M Parameters and Internals

LLM Name: Model Q4 K M
Repository: Hugging Face
Base Model(s): sanjay782/model_merged_16bit (Model Merged 16bit)
Merged Model: Yes
Required VRAM: 0.7 GB
Updated: 2024-07-13
Maintainer: sanjay782
Model Type: llama
Model Files: 2.2 GB, 0.7 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|q4|16bit
Model Architecture: AutoModel
License: apache-2.0
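As a sanity check on the file sizes listed above: Q4_K_M is a mixed 4-/6-bit llama.cpp k-quant that averages roughly 4.5 bits per weight (an approximate, commonly quoted figure, not taken from this page). A minimal sketch estimating the quantized file size from the 16-bit file size, under that assumption:

```python
def gguf_size_estimate(fp16_size_gb: float, bits_per_weight: float = 4.5) -> float:
    """Estimate a Q4_K_M GGUF file size from the 16-bit file size.

    Q4_K_M mixes 4- and 6-bit blocks; ~4.5 bits/weight is an
    approximation, so real files deviate by some margin.
    """
    return fp16_size_gb * bits_per_weight / 16.0

# The listing shows a 2.2 GB 16-bit file; the estimate lands
# in the neighborhood of the 0.7 GB Q4_K_M file reported above.
print(f"{gguf_size_estimate(2.2):.2f} GB")
```

The estimate (about 0.62 GB) is consistent with the 0.7 GB "Required VRAM" figure, with the gap explained by metadata and tensors kept at higher precision.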


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801