LLM Explorer: A Curated Large Language Model Directory and Analytics

Deepseek Coder 6.7 Evol Feedback 4bit by epinnock


Tags: 4bit, 8-bit, Autotrain compatible, Base model: deepseek-ai/deepsee..., Codegen, Conversational, En, Endpoints compatible, Instruct, License: apache-2.0, Llama, Quantized, Region: us, Safetensors, Trl, Unsloth

Deepseek Coder 6.7 Evol Feedback 4bit (epinnock/deepseek-coder-6.7-evol-feedback-4bit)

Best Alternatives to Deepseek Coder 6.7 Evol Feedback 4bit

Model                              Context / RAM    Downloads   Likes
...oder 6.7B Instruct Hf 4bit Mlx  16K / 4 GB       5           0
...rpreter DS 6.7B 6.0bpw H6 EXL2  16K / 5.2 GB     0           1
...rpreter DS 6.7B 8.0bpw H8 EXL2  16K / 6.9 GB     0           1
Test                               16K / 0.3 GB     27          0
...pseek Coder 6.7B Instruct GPTQ  16K / 3.9 GB     1505        22
...epseek Coder 6.7B Instruct AWQ  16K / 3.9 GB     949         11
OpenCodeInterpreter DS 6.7B        16K / 13.5 GB    0           72
NaturalSQL 6.7B V0                 16K / 13.5 GB    103         5
Speechless Coder Ds 6.7B           16K / 13.5 GB    199         14
Code 290K 6.7B Instruct            16K / 13.5 GB    0           3

Deepseek Coder 6.7 Evol Feedback 4bit Parameters and Internals

LLM Name: Deepseek Coder 6.7 Evol Feedback 4bit
Repository: epinnock/deepseek-coder-6.7-evol-feedback-4bit (Hugging Face)
Base Model(s): deepseek-ai/deepseek-coder-6.7b-instruct (Deepseek Coder 6.7B Instruct)
Model Size: 6.7B
Required VRAM: 3.9 GB
Updated: 2024-02-29
Maintainer: epinnock
Model Type: llama
Instruction-Based: Yes
Model Files: 3.9 GB
Supported Languages: en
Quantization Type: 4bit
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.38.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <|end▁of▁sentence|>
Vocabulary Size: 32256
Initializer Range: 0.02
Torch Data Type: bfloat16
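The listed 3.9 GB VRAM figure is consistent with back-of-the-envelope arithmetic for 4-bit quantized weights. A minimal sketch of that estimate (the group size of 64 and the fp16 scale per group are illustrative assumptions, not details published for this checkpoint; real usage adds KV cache, activations, and any layers kept in higher precision):

```python
def estimate_4bit_vram_gb(n_params: float, group_size: int = 64) -> float:
    """Rough weights-only VRAM estimate for a 4-bit quantized model.

    Assumes 0.5 bytes per weight plus one fp16 scale factor per
    quantization group (group_size is an assumed, typical value).
    """
    weight_bytes = n_params * 0.5                 # 4 bits per parameter
    scale_bytes = (n_params / group_size) * 2.0   # fp16 scale per group
    return (weight_bytes + scale_bytes) / 1e9

approx = estimate_4bit_vram_gb(6.7e9)
print(f"{approx:.2f} GB")  # prints "3.56 GB"
```

The ~3.56 GB weights-only estimate sits just under the listed 3.9 GB requirement; the gap is plausibly the embedding/output layers and quantization metadata kept in higher precision.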
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003