LLM Name | Cinder Phi 2 V1 F16 Gguf |
Repository 🤗 | https://huggingface.co/Josephgflowers/Cinder-Phi-2-V1-F16-gguf
Merged Model | Yes |
Model Size | 2.8b |
Required VRAM | 5.6 GB |
Updated | 2025-02-22 |
Maintainer | Josephgflowers |
Model Type | phi |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | fp16, gguf
Model Architecture | PhiForCausalLM |
License | mit |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.38.0.dev0 |
Tokenizer Class | CodeGenTokenizer |
Padding Token | <|endoftext|> |
Vocabulary Size | 51200 |
Torch Data Type | float16 |
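Because the weights ship as a single fp16 GGUF file, one common way to run them locally is through llama-cpp-python. The sketch below is only an illustration under assumptions: the exact `.gguf` filename inside the repository and the prompt text are placeholders, so check the repository's file listing before using it.

```python
# Minimal sketch: download the fp16 GGUF and run it with llama-cpp-python.
# The .gguf filename below is an assumption -- verify it against the files at
# https://huggingface.co/Josephgflowers/Cinder-Phi-2-V1-F16-gguf
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Josephgflowers/Cinder-Phi-2-V1-F16-gguf",
    filename="Cinder-Phi-2-V1.F16.gguf",  # assumed filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=2048,        # matches the model's 2048-token context length
    n_gpu_layers=-1,   # offload all layers if ~5.6 GB of VRAM is available
)

out = llm("Explain what a GGUF file is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```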
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Cinder Phi 2 Test 1 | 2K / 11.1 GB | 118 | 0 |
Phi 2 Lora Cfr | 2K / 11.1 GB | 19 | 0 |
Candle Phi 2 Old | 2K / 5.6 GB | 24 | 0 |
Candle Phi 2 | 2K / 5.6 GB | 22 | 0 |
Phi2 Platypus | 0K / 11.1 GB | 5 | 0 |
Bnb DPO 8bit | 2K / 3 GB | 7 | 0 |
Phi 2 4bit 64rank | 2K / 5.6 GB | 219 | 0 |
Phi 2 Instruct V1 | 2K / 11.1 GB | 15 | 0 |
Phi 2 Nf4 Fp16 Upscaled | 2K / 5.6 GB | 56 | 0 |
MFANN3bv0.24 | 128K / 11.1 GB | 16 | 0 |