LLM Name | MiniCPM 2B Int4 |
---|---|
Repository 🤗 | https://huggingface.co/openbmb/minicpm_2b_int4 |
Model Size | 2B |
Required VRAM | 2.4 GB |
Updated | 2024-09-07 |
Maintainer | openbmb |
Model Type | minicpm |
Model Files | |
Supported Languages | en, zh |
GPTQ Quantization | Yes |
Quantization Type | gptq, 4bit |
Model Architecture | MiniCPMForCausalLM |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.41.1 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 122753 |
Torch Data Type | float16 |
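The card above lists a GPTQ 4-bit checkpoint with a custom `MiniCPMForCausalLM` architecture and a 2048-token context window. Below is a minimal loading sketch, assuming the checkpoint resolves through transformers' standard GPTQ integration (which requires `optimum` and `auto-gptq` alongside `transformers`) and that the repo's remote modeling code may be trusted; the prompt text is purely illustrative.

```python
# Minimal sketch: load the int4 GPTQ checkpoint and generate text.
# Assumes `transformers`, `optimum`, and `auto-gptq` are installed and a
# CUDA GPU with roughly 2.4 GB of free VRAM (per the card) is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "openbmb/minicpm_2b_int4"

# MiniCPM ships custom modeling code (MiniCPMForCausalLM), so
# trust_remote_code=True is needed for both tokenizer and model.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Briefly explain 4-bit quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus completion within the 2048-token Model Max Length.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are already quantized to 4 bits, no `quantization_config` is passed at load time; the GPTQ metadata stored in the repository drives dequantization.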
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
MiniCPM 2B DPO Fp16 | 4K / 5.5 GB | 562 | 34 |
MiniCPM 2B 128K | 64K / 6 GB | 247 | 42 |
Sparsing Law 0.1B Relu | 4K / 0.4 GB | 67 | 1 |
MiniCPM 2B Sft Bf16 | 4K / 5.5 GB | 6504 | 118 |
MiniCPM 2B Sft Fp32 | 4K / 10.9 GB | 675 | 296 |
...iCPM 2B RAFT Lora Hotpotqa Dev | 4K / 5.5 GB | 19 | 0 |
MiniCPM Duplex | 4K / 5.5 GB | 14 | 2 |
MiniCPM MoE 8x2B | 4K / 27.7 GB | 139 | 40 |
...iniCPM 2B Sft Fp32 Safetensors | 4K / 10.9 GB | 11 | 1 |
...iniCPM 2B DPO Fp32 Safetensors | 4K / 10.9 GB | 10 | 1 |