LLM Name | Phi 2 4bit 64rank |
Repository 🤗 | https://huggingface.co/LoftQ/phi-2-4bit-64rank |
Model Size | 2.8B |
Required VRAM | 5.6 GB |
Updated | 2025-02-05 |
Maintainer | LoftQ |
Model Type | phi |
Model Files | |
Supported Languages | en |
Quantization Type | 4bit |
Model Architecture | PhiForCausalLM |
License | mit |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.39.3 |
Tokenizer Class | CodeGenTokenizer |
Vocabulary Size | 51200 |
LoRA Model | Yes |
Torch Data Type | float16 |
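
Because this checkpoint pairs a 4-bit quantized Phi-2 backbone with a rank-64 LoRA adapter, it is typically loaded in two steps: the quantized base model via `transformers`, then the adapter via `peft`. Below is a minimal loading sketch of that flow; the `loftq_init` adapter subfolder and the exact 4-bit settings are assumptions based on LoftQ's published loading recipe for its other releases, so verify them against the repository's model card before use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/phi-2-4bit-64rank"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Load the Phi-2 backbone in 4-bit NF4, matching the card's quantization type.
base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # matches the "Torch Data Type" field above
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
)

# Attach the rank-64 LoftQ-initialized LoRA adapter on top of the 4-bit base.
model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="loftq_init",  # assumed adapter location, per LoftQ's other repos
    is_trainable=True,       # set to False for inference-only use
)
```

With `is_trainable=True` the adapter weights remain updatable for LoRA fine-tuning on top of the frozen 4-bit base; for pure inference, loading the adapter frozen avoids the extra training bookkeeping.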
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Bnb DPO 8bit | 2K / 3 GB | 78 | 0 |
Phi 2 Nf4 Fp16 Upscaled | 2K / 5.6 GB | 26 | 0 |
MFANN3bv0.24 | 128K / 11.1 GB | 5 | 0 |
MFANN3b | 128K / 11.1 GB | 116 | 0 |
MFANN3bv1.3 | 128K / 11.1 GB | 13 | 0 |
MFANN3bv1.1 | 128K / 11.1 GB | 16 | 0 |
MFANN3bv0.23 | 128K / 11.1 GB | 6 | 0 |
MFANN3b SFT | 128K / 5.6 GB | 169 | 0 |
MFANN3b Rebase | 128K / 11.1 GB | 10 | 0 |
MFANN3bv1.2 | 126K / 11.1 GB | 32 | 0 |