LLM Name | Phi4 MedIT 10B O1 |
Repository 🤗 | https://huggingface.co/mkurman/phi4-MedIT-10B-o1 |
Model Size | 10B |
Required VRAM | 20.6 GB |
Updated | 2025-05-15 |
Maintainer | mkurman |
Model Type | llama |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | LlamaForCausalLM |
License | mit |
Context Length | 16384 |
Model Max Length | 16384 |
Transformers Version | 4.46.2 |
Tokenizer Class | GPT2Tokenizer |
Padding Token | <|endoftext|> |
Vocabulary Size | 100352 |
Torch Data Type | bfloat16 |
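Based on the specifications above, here is a minimal sketch of how the model could be loaded with Hugging Face `transformers`. It assumes the repository ID from the table, the `bfloat16` dtype listed under Torch Data Type, and a GPU with roughly the 20.6 GB of VRAM stated above; the medical prompt is only a placeholder and is not taken from the model card.

```python
# Minimal loading sketch; requires transformers, torch, and accelerate
# (accelerate is needed for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkurman/phi4-MedIT-10B-o1"  # repository from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the "Torch Data Type" row
    device_map="auto",           # spreads weights across available devices
)

# Placeholder prompt for illustration only.
prompt = "A 45-year-old patient presents with chest pain. List likely differential diagnoses."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Since GGUF quantization is listed, the model could alternatively be run through a llama.cpp-compatible runtime once a GGUF file is obtained; the exact steps depend on which quantized files the repository actually ships.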
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Falcon3 10B Instruct 1.58bit | 32K / 4 GB | 256 | 13 |
...truct Gptqmodel 4bit Vortex V1 | 32K / 5.6 GB | 6 | 3 |
...enbuddy Falcon3 10B V24.2 131K | 128K / 20.7 GB | 5 | 0 |
Priya 10B | 128K / 20.5 GB | 13 | 1 |
HelpingAI2.5 10B | 128K / 20.5 GB | 66 | 2 |
HelpingAI2.5 10B | 128K / 20.5 GB | 613 | 5 |
L3.1 Mochav2 10B | 128K / 42.8 GB | 21 | 0 |
HELVETE X | 128K / 20.5 GB | 18 | 6 |
Yarn Solar 10B 64K | 64K / 21.4 GB | 2248 | 15 |
StoryTeller 10B 2e V2 | 58K / 21.4 GB | 5 | 1 |