Phi 2 Ov Quantized by Intel


Tags: Autotrain compatible, Code, En, Endpoints compatible, Openvino, Phi, Region: us

Phi 2 Ov Quantized Benchmarks

Benchmark scores (nn.n%) indicate how Phi 2 Ov Quantized (Intel/phi-2-ov-quantized) compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Phi 2 Ov Quantized Parameters and Internals

Model Type: Transformer-based, text-generation
Use Cases:
Areas: Research, Commercial applications
Primary Use Cases: QA Format, Chat Format, Code Format (see the prompt sketch below)
Limitations: May generate inaccurate code and facts; limited scope for code; unreliable responses to instructions; language limitations; potential societal biases; toxicity; verbosity
Considerations: Users should treat generated content as a starting point and verify all outputs, especially in unfamiliar contexts.
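
Phi-2-style models are prompted with plain text layouts rather than a dedicated chat template. The snippet below is a minimal illustration of the three prompt styles listed above (QA, chat, code), following the conventions documented for the upstream microsoft/phi-2 model; the example prompts themselves are hypothetical.

    # Illustrative prompt layouts for the three supported formats.
    # The concrete prompts are placeholders; only the layout matters.

    # QA format: a single instruction followed by "Output:".
    qa_prompt = "Instruct: Summarize what INT8 quantization does to a language model.\nOutput:"

    # Chat format: alternating speaker turns, ending on the turn to be completed.
    chat_prompt = (
        "Alice: Why would I quantize Phi-2 with OpenVINO?\n"
        "Bob:"
    )

    # Code format: a function signature and docstring for the model to complete.
    code_prompt = (
        "def fibonacci(n: int) -> int:\n"
        '    """Return the n-th Fibonacci number."""\n'
    )
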
Supported Languages: English (standard)
Training Details:
Data Sources: NLP synthetic texts, filtered websites
Model Architecture: Transformer
LLM Name: Phi 2 Ov Quantized
Repository: 🤗 https://huggingface.co/Intel/phi-2-ov-quantized
Required VRAM: 1.9 GB
Updated: 2024-12-22
Maintainer: Intel
Model Type: phi
Model Files: 1.9 GB
Supported Languages: en
Model Architecture: PhiForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.37.0
Tokenizer Class: CodeGenTokenizer
Vocabulary Size: 51200
Torch Data Type: float16
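
Since the repository ships an OpenVINO-exported, quantized copy of Phi-2, it is normally loaded through optimum-intel rather than plain PyTorch. The sketch below is a minimal example assuming optimum[openvino] and transformers (>= 4.37.0, as listed above) are installed; the prompt and generation settings are placeholders.

    from transformers import AutoTokenizer
    from optimum.intel import OVModelForCausalLM

    model_id = "Intel/phi-2-ov-quantized"

    # Load the quantized OpenVINO IR and the matching CodeGen tokenizer.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = OVModelForCausalLM.from_pretrained(model_id)

    # Phi-2 QA-style prompt; keep total length within the 2048-token context window.
    prompt = "Instruct: Explain what OpenVINO model quantization does.\nOutput:"
    inputs = tokenizer(prompt, return_tensors="pt")

    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Inference runs on CPU by default with optimum-intel; the roughly 1.9 GB of model files listed above is a reasonable lower bound for the memory needed to hold the weights.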

Quantized Models of Phi 2 Ov Quantized

Model                Likes    Downloads    VRAM
Phi 2 Quantized      1        1423         1 GB

Best Alternatives to Phi 2 Ov Quantized

Best Alternatives          Context / RAM    Downloads    Likes
Phi1.5 Quantized 2         2K / n/a         9            0
Merlin1.4                  2K / 5.6 GB      68           0
Merlin1.5                  2K / 5.6 GB      75           0
Merlin1.2                  2K / 5.6 GB      72           0
Merlin1.3                  2K / 5.6 GB      72           0
Tinyllama Quant            2K / 0.8 GB      7            1
Phi1.5 Update 4            2K / n/a         7            0
Phi 2 Fp8                  2K / 3.3 GB      8            0
Phi 2 Int8 Ov              2K / 0 GB        51           0
Phi Bode 2 Ultraalpaca     2K / 5.6 GB      17           2

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217