LLM Name | Phi 3 Mini 4K Instruct Cinder Llamafied With 16bit GGUF |
---|---|
Repository 🤗 | https://huggingface.co/Josephgflowers/Phi-3-mini-4k-instruct-Cinder-llamafied-with-16bit-GGUF |
Model Size | 3.8B |
Required VRAM | 7.6 GB |
Updated | 2024-09-18 |
Maintainer | Josephgflowers |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf / 16bit |
Model Architecture | LlamaForCausalLM |
License | mit |
Context Length | 4096 |
Model Max Length | 4096 |
Transformers Version | 4.40.0.dev0 |
Tokenizer Class | LlamaTokenizer |
Padding Token | <|endoftext|> |
Vocabulary Size | 32064 |
Torch Data Type | bfloat16 |
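Since the repository ships the model as a 16-bit GGUF quantization, it can be loaded directly with llama-cpp-python. Below is a minimal sketch; the `filename` glob is an assumption (the exact GGUF filename is not listed above), so check the repo's file listing before running:

```python
# Minimal sketch: chat with the 16-bit GGUF via llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Josephgflowers/Phi-3-mini-4k-instruct-Cinder-llamafied-with-16bit-GGUF",
    filename="*.gguf",  # glob pattern; actual GGUF filename is an assumption -- check the repo
    n_ctx=4096,         # matches the model's 4096-token context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Because the model is llamafied (architecture `LlamaForCausalLM`, tokenizer `LlamaTokenizer`), the non-GGUF weights should also load through standard `transformers` `AutoModelForCausalLM` / `AutoTokenizer` calls with `torch_dtype=torch.bfloat16`.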
Best Alternatives | Context / VRAM | Downloads | Likes
---|---|---|---|
Phi 3.5 Mini Instruct | 128K / 7.6 GB | 13571 | 39 |
Llamaphi 3 128K Instruct | 128K / 7.6 GB | 85 | 0 |
Phi 3 Mini 128K Instruct LLaMAfied | 128K / 7.6 GB | 11 | 2 |
Phi 3 Mini 4K Instruct LLaMAfied | 4K / 7.6 GB | 150 | 11 |
Phillama 3.8B V0.1 | 4K / 7.6 GB | 7 | 10 |
Phillama 3.8B V1 | 4K / 7.6 GB | 8 | 5 |