Phi 2 Pytdml by microsoft


Tags: Autotrain compatible · Code · En · Endpoints compatible · Phi · Region: us · Safetensors · Sharded · Tensorflow
Model Card on HF 🤗: https://huggingface.co/microsoft/phi-2-pytdml

Phi 2 Pytdml Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Phi 2 Pytdml (microsoft/phi-2-pytdml)

Phi 2 Pytdml Parameters and Internals

Model Type 
Transformer, text generation
Use Cases 
Areas:
research
Primary Use Cases:
QA format, chat format, code format
Limitations:
May generate inaccurate code and facts; limited scope for code; unreliable responses to instructions; language limitations; potential societal biases; toxicity; verbosity
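The three primary use cases above correspond to the prompt formats described in the upstream Phi-2 model card. A minimal sketch, assuming the standard Phi-2 conventions (the helper names here are illustrative, not part of any library):

```python
# Sketch of the three prompt formats Phi-2 supports: QA, chat, and code.
# The format strings follow the upstream microsoft/phi-2 model card; they
# are assumptions for illustration, not a pytdml-specific API.

def qa_prompt(question: str) -> str:
    """QA format: an instruction followed by an 'Output:' cue."""
    return f"Instruct: {question}\nOutput:"

def chat_prompt(turns: list[tuple[str, str]]) -> str:
    """Chat format: speaker-labelled turns, ending with the next speaker's cue."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns) + "\nBob:"

def code_prompt(signature: str, docstring: str) -> str:
    """Code format: a function stub with a docstring for the model to complete."""
    return f'{signature}\n    """{docstring}"""\n'

print(qa_prompt("What is the capital of France?"))
```

The model then continues the text after the final cue (`Output:`, `Bob:`, or the docstring).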
Additional Notes 
Phi-2 has been optimized with DirectML and the model maintains compatibility with original checkpoints though some weights need updates.
Supported Languages 
en (standard)
Training Details 
Data Sources:
NLP synthetic data created by AOAI GPT-3.5, filtered web data from Falcon RefinedWeb, SlimPajama
Data Volume:
250 billion tokens
Methodology:
next-word prediction objective
Context Length:
2048
Training Time:
14 days
Hardware Used:
96xA100-80G
Model Architecture:
Transformer-based model
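The training figures above (250 billion tokens, 14 days, 96 A100-80G GPUs) imply a rough aggregate throughput. The arithmetic below is purely illustrative and assumes the stated wall-clock time covers the full token count:

```python
# Back-of-the-envelope throughput implied by the training stats:
# 250B tokens over 14 days on 96 A100-80G GPUs.

tokens = 250e9            # Data Volume: 250 billion tokens
seconds = 14 * 24 * 3600  # Training Time: 14 days
gpus = 96                 # Hardware Used: 96xA100-80G

total_tps = tokens / seconds   # cluster-wide tokens per second
per_gpu_tps = total_tps / gpus # per-GPU tokens per second

print(f"{total_tps:,.0f} tokens/s total, {per_gpu_tps:,.0f} tokens/s per GPU")
```

This works out to roughly 200K tokens/s across the cluster, on the order of 2K tokens/s per GPU.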
LLM Name: Phi 2 Pytdml
Repository 🤗: https://huggingface.co/microsoft/phi-2-pytdml
Model Size: 2.8b
Required VRAM: 5.6 GB
Updated: 2025-02-22
Maintainer: microsoft
Model Type: phi
Model Files: 5.0 GB (1-of-2), 0.6 GB (2-of-2)
Supported Languages: en
Model Architecture: PhiForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.37.0
Tokenizer Class: CodeGenTokenizer
Vocabulary Size: 51200
Torch Data Type: float16
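The listed figures are internally consistent: 2.8 billion parameters stored in float16 (2 bytes each) come to about 5.6 GB of weights, matching both the "Required VRAM" row and the sum of the two sharded files (5.0 GB + 0.6 GB). A quick sanity check:

```python
# Sanity-check the table: parameter count x bytes-per-parameter should
# reproduce the listed weight footprint.

params = 2.8e9        # Model Size: 2.8b
bytes_per_param = 2   # Torch Data Type: float16 (2 bytes per value)

weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.1f} GB")
```

Note this is the weight footprint only; actual VRAM use at inference time is higher once activations and the KV cache are included.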

Best Alternatives to Phi 2 Pytdml

Best Alternatives | Context / RAM | Downloads | Likes
MFANN3bv0.24 | 128K / 11.1 GB | 16 | 0
MFANN3b | 128K / 11.1 GB | 66 | 0
MFANN Phigments Slerp V3.2 | 128K / 5.6 GB | 33 | 0
MFANN3bv1.4 | 128K / 11.1 GB | 22 | 0
MFANN3bv1.3 | 128K / 11.1 GB | 25 | 0
MFANN3bv0.23 | 128K / 11.1 GB | 36 | 0
MFANN3bv1.1 | 128K / 11.1 GB | 13 | 0
MFANN3b SFT | 128K / 5.6 GB | 65 | 0
MFANN3b Rebase | 128K / 11.1 GB | 13 | 0
MFANN3bv1.2 | 126K / 11.1 GB | 39 | 0
Note: a green score (e.g. "73.2") means the model is better than microsoft/phi-2-pytdml.

Rank the Phi 2 Pytdml Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227