LLM Name | Phi 4 Unsloth Bnb 4bit
Repository 🤗 | https://huggingface.co/unsloth/phi-4-unsloth-bnb-4bit
Base Model(s) | |
Model Size | 8.5b |
Required VRAM | 10.4 GB |
Updated | 2025-05-15 |
Maintainer | unsloth |
Model Type | llama |
Model Files | |
Supported Languages | en |
Quantization Type | 4bit |
Model Architecture | LlamaForCausalLM |
License | mit |
Context Length | 16384 |
Model Max Length | 16384 |
Transformers Version | 4.47.1 |
Tokenizer Class | GPT2Tokenizer |
Padding Token | <|dummy_87|> |
Vocabulary Size | 100352 |
Torch Data Type | bfloat16 |
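Because the repository ships pre-quantized bitsandbytes 4-bit weights (Quantization Type 4bit above), the checkpoint should load directly through the standard `transformers` API. Below is a minimal sketch, assuming `bitsandbytes` is installed and a CUDA GPU with roughly the listed 10.4 GB of VRAM is available; the prompt string is purely illustrative:

```python
# Minimal loading sketch for the pre-quantized bnb 4-bit checkpoint.
# Assumes transformers >= 4.47.1 (per the table above) and bitsandbytes installed;
# the quantization config is read from the checkpoint itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/phi-4-unsloth-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # place layers on available GPU(s)
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type field above
)

prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```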
Model | Likes | Downloads | VRAM
---|---|---|---
Imagine V0.5 16bit | 0 | 10 | 29 GB |
ThinkPhi1.1 Tensors | 2 | 24 | 29 GB |
....Turn.R1Distill V1.5.1 Tensors | 4 | 9 | 29 GB |
Phi 4 COT | 0 | 13 | 29 GB |
Phi4 New Params 16bit | 0 | 6 | 29 GB |
...ocalAI Functioncall Phi 4 V0.3 | 8 | 9 | 29 GB |
...i4.Turn.R1Distill V1.0 Tensors | 1 | 0 | 29 GB |
Phi 4 RP Lora Model | 5 | 0 | 0 GB |
Phi 4 Rp V1 Lora | 1 | 0 | 0 GB |