DeepSeek V2 Lite Chat GGUF by second-state


Tags: Autotrain compatible · Base model: deepseek-ai/deepsee... · Custom code · Deepseek v2 · Gguf · License: other · Q2 · Quantized · Region: us

Rank the DeepSeek V2 Lite Chat GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback will help the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
DeepSeek V2 Lite Chat GGUF (second-state/DeepSeek-V2-Lite-Chat-GGUF)

Best Alternatives to DeepSeek V2 Lite Chat GGUF

Best Alternatives                    Context/RAM     Downloads  Likes
...ek Coder V2 Lite Instruct GGUF    160K / 6.4 GB   3032       31
...ek Coder V2 Lite Instruct GGUF    160K / 6.4 GB   1194       0
DeepSeek V2 Lite Chat GGUF           160K / 6.4 GB   9          0

DeepSeek V2 Lite Chat GGUF Parameters and Internals

LLM Name: DeepSeek V2 Lite Chat GGUF
Repository: Open on 🤗
Model Name: DeepSeek-V2-Lite-Chat
Model Creator: deepseek-ai
Base Model(s): DeepSeek V2 Lite Chat (deepseek-ai/DeepSeek-V2-Lite-Chat)
Required VRAM: 6.4 GB
Updated: 2024-07-13
Maintainer: second-state
Model Type: deepseek_v2
Model Files: 6.4 GB · 8.5 GB · 8.1 GB · 7.5 GB · 8.9 GB · 10.4 GB · 9.5 GB · 10.8 GB · 11.9 GB · 11.1 GB · 14.1 GB · 16.7 GB · 31.4 GB
GGUF Quantization: Yes
Quantization Type: gguf|q2|q4_k|q5_k
Model Architecture: DeepseekV2ForCausalLM
License: other
Context Length: 163840
Model Max Length: 163840
Transformers Version: 4.33.1
Vocabulary Size: 102400
Initializer Range: 0.02
Torch Data Type: bfloat16


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801