Microsoft Phi 3 Medium 128K Instruct 8 0bpw EXL2 by Zoyd


Tags: 8-bit, Autotrain compatible, Code, Conversational, Custom code, Endpoints compatible, Exl2, Instruct, Multilingual, Phi3, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Microsoft Phi 3 Medium 128K Instruct 8 0bpw EXL2 Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Microsoft Phi 3 Medium 128K Instruct 8 0bpw EXL2 (Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2)

Microsoft Phi 3 Medium 128K Instruct 8 0bpw EXL2 Parameters and Internals

Model Type: text generation

Use Cases
Applications: general-purpose AI systems, research, commercial applications
Primary Use Cases: memory/compute-constrained environments; latency-bound scenarios; strong reasoning (code, math, and logic)
Limitations: the models may not be suitable for high-risk scenarios without further assessment.
Considerations: the models have not been specifically evaluated for every downstream purpose.
Additional Notes: The Phi-3 Medium models can run on multiple platforms via optimized ONNX configurations (see the ONNX sketch after the Input/Output section below).
Supported Languages: English (primary)
Training Details
Data Sources: publicly available documents, high-quality educational data, code, newly created synthetic data, high-quality chat-format supervised data
Data Volume: 4.8 trillion tokens
Methodology: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO)
Context Length: 128,000 tokens
Training Time: 42 days
Hardware Used: 512 H100-80G GPUs
Model Architecture: dense decoder-only Transformer
Responsible AI Considerations
Fairness: the models were trained primarily on English text, so performance on other languages and dialects may be degraded.
Transparency: developers are advised to follow transparency best practices.
Accountability: developers are responsible for ensuring compliance with relevant laws and regulations.
Mitigation Strategies: use safety classifiers or custom safety solutions suited to the application.
Input/Output
Input Format: chat format with user and assistant roles
Accepted Modalities: text
Output Format: generated text
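
The chat format above is what the tokenizer's built-in chat template produces. A minimal sketch of building a Phi-3 prompt with the Hugging Face transformers library; the example message is invented, and `trust_remote_code=True` mirrors the "Custom code" tag on this listing:

```python
# Minimal sketch: render user/assistant messages into the Phi-3 chat format
# via the tokenizer's chat template (the message content is hypothetical).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2",
    trust_remote_code=True,  # the listing is tagged "Custom code"
)

messages = [
    {"role": "user", "content": "What does 8.0 bits per weight mean in EXL2?"},
]

# Renders roughly to: <|user|>\n...<|end|>\n<|assistant|>\n
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```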
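
For the ONNX route mentioned under Additional Notes, a minimal generation sketch with the onnxruntime-genai package. The model directory is a hypothetical local download of a Phi-3 ONNX build, and the `append_tokens` call follows recent onnxruntime-genai releases; older versions set `params.input_ids` instead:

```python
# Minimal sketch: text generation with onnxruntime-genai against a locally
# downloaded Phi-3 ONNX model. The directory path is a placeholder.
import onnxruntime_genai as og

model = og.Model("./phi3-medium-128k-instruct-onnx")  # hypothetical local path
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(
    tokenizer.encode("<|user|>\nHello!<|end|>\n<|assistant|>\n")
)
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```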
LLM Name: Microsoft Phi 3 Medium 128K Instruct 8 0bpw EXL2
Repository: 🤗 https://huggingface.co/Zoyd/microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2
Required VRAM: 13.4 GB
Updated: 2025-02-03
Maintainer: Zoyd
Model Type: phi3
Instruction-Based: Yes
Model Files: 8.6 GB (1 of 2), 4.8 GB (2 of 2)
Quantization Type: exl2
Model Architecture: Phi3ForCausalLM
License: mit
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32064
Torch Data Type: bfloat16
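
Given the exl2 quantization type, the natural way to run these files is the exllamav2 library. A minimal sketch, assuming a local copy of the repository, a recent exllamav2 release (for `ExLlamaV2DynamicGenerator`), and enough VRAM for the 13.4 GB of weights plus KV cache; the path and sequence length are assumptions, not values from this listing:

```python
# Minimal sketch: load the 8.0bpw EXL2 quant with exllamav2 and generate.
# model_dir and max_seq_len are placeholders, not values from the listing.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./microsoft_Phi-3-medium-128k-instruct-8_0bpw_exl2"

config = ExLlamaV2Config(model_dir)
config.max_seq_len = 8192  # far below the 131072 maximum, to keep cache VRAM modest

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
# Pass paged=False here if flash-attn is not installed.
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

output = generator.generate(
    prompt="<|user|>\nWrite a haiku about quantization.<|end|>\n<|assistant|>\n",
    max_new_tokens=120,
)
print(output)
```

At 8.0 bits per weight this is the largest of the EXL2 quants in the alternatives list below, trading extra VRAM for the least quantization loss relative to the bfloat16 original.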

Best Alternatives to Microsoft Phi 3 Medium 128K Instruct 8 0bpw EXL2

Best Alternatives                    Context / RAM     Downloads  Likes
...m 128K Instruct 6.0bpw H6 EXL2    128K / 10.7 GB    9          3
...m 128K Instruct 8.0bpw H8 EXL2    128K / 13.4 GB    4          4
...m 128K Instruct 3.0bpw H6 EXL2    128K / 5.6 GB     5          0
...m 128K Instruct 5.0bpw H6 EXL2    128K / 8.9 GB     5          0
...28K Instruct Ov Fp16 Int4 Asym    128K / 2.5 GB     5          0
...128K Instruct HQQ 4bit Smashed    128K / 2.3 GB     8          0
...128K Instruct HQQ 2bit Smashed    128K / 1.4 GB     6          0
Phi 3 Mini 4K Instruct Fp16          4K / n/a          508        3
NuExtract Bpw6 EXL2                  4K / 3 GB         4          1
...Mini 4K Geminified 3 0bpw EXL2    4K / 1.6 GB       5          0

Original data from HuggingFace, OpenCompass, and various public Git repositories.
Release v20241227