Phi 3 Medium 128K Instruct 5.0bpw H6 EXL2 by LoneStriker

Tags: 5-bit, Autotrain compatible, Code, Conversational, Custom code, Endpoints compatible, EXL2, Instruct, Multilingual, Phi3, Quantized, region:us, Safetensors, Sharded, Tensorflow

Phi 3 Medium 128K Instruct 5.0bpw H6 EXL2 Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Phi 3 Medium 128K Instruct 5.0bpw H6 EXL2 (LoneStriker/Phi-3-medium-128k-instruct-5.0bpw-h6-exl2)

Phi 3 Medium 128K Instruct 5.0bpw H6 EXL2 Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
commercial, research
Applications:
General purpose AI systems, applications requiring strong reasoning
Primary Use Cases:
Memory/compute-constrained environments, Latency-bound scenarios, Reasoning (code, math, logic)
Limitations:
Not evaluated for all downstream purposes; consider the limitations of AI systems. Accurate, safe, and fair use in high-risk scenarios requires additional evaluations.
Considerations:
Adhere to laws and regulations; implement debiasing techniques in applications.
Supported Languages 
Multilingual; English is the primary language, and performance is weaker in other languages.
Training Details 
Data Sources:
Publicly available documents, Filtered documents, High-quality educational data, Code, Synthetic data, Textbook-like data
Data Volume:
4.8 trillion tokens
Methodology:
Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO); a sketch of the DPO objective follows this section.
Context Length:
128000
Training Time:
42 days
Hardware Used:
512 H100-80G GPUs
Model Architecture:
Dense decoder-only Transformer model
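
The methodology above pairs SFT with a DPO stage. As a reminder of what DPO optimizes, here is a minimal sketch of the DPO objective from Rafailov et al. (2023) in PyTorch; the beta value and the log-probability inputs are illustrative assumptions, not values reported for this model:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO objective: make the policy prefer chosen over rejected
    responses by a wider margin than a frozen reference model does.

    Each input is a batch of summed token log-probabilities of one
    full response under the policy or the reference model.
    """
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # -log(sigmoid(beta * margin)), averaged over the batch
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios)).mean()
```
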
Responsible AI Considerations 
Fairness:
These models can over- or under-represent groups or reinforce demeaning stereotypes.
Transparency:
Phi series models can produce unreliable or offensive content.
Mitigation Strategies:
Developers should apply debiasing techniques and evaluate for fairness, safety, and accuracy.
Input Output 
Input Format:
Prompts in chat format, using the provided templates; see the example after this section.
Accepted Modalities:
Text
Output Format:
Generated text
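
The chat format above is Phi-3's `<|user|>` / `<|assistant|>` template. Below is a minimal sketch of building a prompt with the transformers library, assuming the repo's bundled tokenizer files carry the chat template (the message content is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "LoneStriker/Phi-3-medium-128k-instruct-5.0bpw-h6-exl2")

messages = [{"role": "user", "content": "Explain EXL2 quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape, per the Phi-3 chat format:
# <|user|>
# Explain EXL2 quantization in one sentence.<|end|>
# <|assistant|>
```
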
LLM Name: Phi 3 Medium 128K Instruct 5.0bpw H6 EXL2
Repository 🤗: https://huggingface.co/LoneStriker/Phi-3-medium-128k-instruct-5.0bpw-h6-exl2
Required VRAM: 8.9 GB
Updated: 2025-01-30
Maintainer: LoneStriker
Model Type: phi3
Instruction-Based: Yes
Model Files: 8.5 GB (1 of 2), 0.4 GB (2 of 2)
Quantization Type: exl2
Model Architecture: Phi3ForCausalLM
License: mit
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32064
Torch Data Type: bfloat16
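
The 8.9 GB VRAM figure is consistent with the quantization settings: roughly 14B parameters at 5.0 bits per weight is 14e9 x 5.0 / 8 = 8.75 GB of weights, plus a little extra for the 6-bit (H6) output head. Below is a minimal loading-and-generation sketch using the exllamav2 library, modeled on its bundled examples; constructor and generator signatures vary between exllamav2 versions, so treat it as a starting point rather than a definitive recipe:

```python
# pip install exllamav2  (requires a CUDA build of PyTorch)
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "Phi-3-medium-128k-instruct-5.0bpw-h6-exl2"  # local snapshot of the repo

config = ExLlamaV2Config(model_dir)
config.max_seq_len = 8192            # raise toward 131072 if the KV cache fits in VRAM

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # splits the ~8.9 GB of weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Phi-3 chat format, as in the template example above
prompt = "<|user|>\nWhat is 17 * 23?<|end|>\n<|assistant|>\n"
print(generator.generate_simple(prompt, settings, num_tokens=128))
```
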

Best Alternatives to Phi 3 Medium 128K Instruct 5.0bpw H6 EXL2

Best Alternatives                 | Context / RAM  | Downloads | Likes
...m 128K Instruct 6.0bpw H6 EXL2 | 128K / 10.7 GB | 9         | 3
...m 128K Instruct 8.0bpw H8 EXL2 | 128K / 13.4 GB | 4         | 4
...dium 128K Instruct 8 0bpw EXL2 | 128K / 13.4 GB | 4         | 1
...m 128K Instruct 3.0bpw H6 EXL2 | 128K / 5.6 GB  | 5         | 0
...28K Instruct Ov Fp16 Int4 Asym | 128K / 2.5 GB  | 5         | 0
...128K Instruct HQQ 4bit Smashed | 128K / 2.3 GB  | 8         | 0
...128K Instruct HQQ 2bit Smashed | 128K / 1.4 GB  | 6         | 0
Phi 3 Mini 4K Instruct Fp16       | 4K / GB        | 508       | 3
NuExtract Bpw6 EXL2               | 4K / 3 GB      | 4         | 1
...Mini 4K Geminified 3 0bpw EXL2 | 4K / 1.6 GB    | 5         | 0
Note: a green score (e.g., "73.2") means the model is better than LoneStriker/Phi-3-medium-128k-instruct-5.0bpw-h6-exl2.
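
To sanity-check the Context / RAM column against a quant's bits per weight, a rough weights-only estimate works well; embeddings and head layers make the real files deviate by a few hundred MB, and the 14B figure below is Phi-3-medium's parameter count:

```python
def est_weight_gb(n_params_billion: float, bpw: float) -> float:
    """Rough weights-only size of an EXL2 quant in GB."""
    # 1e9 params * bits-per-weight / 8 bits-per-byte / 1e9 bytes-per-GB
    return n_params_billion * bpw / 8

for bpw in (3.0, 5.0, 6.0, 8.0):
    print(f"{bpw:.1f} bpw ~ {est_weight_gb(14, bpw):.1f} GB")
# 3.0 bpw ~ 5.2 GB   (table above: 5.6 GB)
# 5.0 bpw ~ 8.8 GB   (table above: 8.9 GB)
# 6.0 bpw ~ 10.5 GB  (table above: 10.7 GB)
# 8.0 bpw ~ 14.0 GB  (table above: 13.4 GB)
```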

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227