Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed by PrunaAI


Tags: 2bit · Autotrain compatible · Base model:finetune:microsoft/... · Base model:microsoft/phi-3-min... · Conversational · Custom code · Endpoints compatible · Instruct · Phi3 · Pruna-ai · Quantized · Region:us

Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed (PrunaAI/microsoft-Phi-3-mini-128k-instruct-HQQ-2bit-smashed)

Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed Parameters and Internals

Model Type: compressed, optimized

Use Cases
Limitations: The quality of the model output might vary compared to the base model.
Considerations: Efficiency results may vary in other settings. We recommend benchmarking under your own use-case conditions.

Additional Notes: To compress your own models, contact PrunaAI for premium access and tech support for specific use cases.
Training Details
Methodology: The model is compressed with hqq (Half-Quadratic Quantization).
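HQQ stores weights as low-bit integer codes plus per-group scale and offset parameters; the half-quadratic solver that gives the method its name fits those parameters to minimize reconstruction error. As a rough illustration only (not PrunaAI's actual pipeline, and using simple round-to-nearest rather than HQQ's solver), a sketch of group-wise 2-bit quantization:

```python
# Simplified sketch of group-wise 2-bit weight quantization.
# HQQ itself optimizes the scale/offset per group; this round-to-nearest
# version only illustrates the storage format (2 bits -> 4 levels, 0..3).

def quantize_2bit(weights, group_size=4):
    """Quantize a flat list of floats to 2-bit codes with per-group params."""
    codes, params = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 3 or 1.0  # map [lo, hi] onto levels 0..3
        params.append((scale, lo))
        codes.extend(round((w - lo) / scale) for w in group)
    return codes, params

def dequantize_2bit(codes, params, group_size=4):
    """Reconstruct approximate float weights from codes and group params."""
    return [code * params[i // group_size][0] + params[i // group_size][1]
            for i, code in enumerate(codes)]

codes, params = quantize_2bit([0.1, -0.4, 0.25, 0.0, 1.0, 0.5, -1.0, 0.2])
restored = dequantize_2bit(codes, params)
```

Each weight costs 2 bits plus a share of its group's scale/offset, which is where the roughly 1.4 GB footprint of this model comes from.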
Input Output
Input Format: Tokens from text queries (e.g. "What is the color of prunes?")
Accepted Modalities: text
Output Format: Text generation (decoded response)
Performance Tips: Directly assess efficiency gains in your own use cases.
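One way to follow that advice is to time generation end-to-end and report tokens per second. A minimal, model-agnostic sketch (the `generate_fn` callable is a hypothetical stand-in for your own generation call, e.g. a wrapper around `model.generate`):

```python
import time

def tokens_per_second(generate_fn, runs=3):
    """Time generate_fn over several runs and return the mean tokens/sec.

    generate_fn is any callable that performs one generation and returns
    the number of tokens it produced; plug in your own model harness.
    """
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        n_tokens = generate_fn()
        elapsed = max(time.perf_counter() - start, 1e-9)  # avoid div-by-zero
        rates.append(n_tokens / elapsed)
    return sum(rates) / len(rates)

# Example with a dummy generator standing in for the real model:
rate = tokens_per_second(lambda: 128)
```

Running this against both the smashed model and the base model in your own setting gives a direct before/after efficiency comparison.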
LLM Name: Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed
Repository 🤗: https://huggingface.co/PrunaAI/microsoft-Phi-3-mini-128k-instruct-HQQ-2bit-smashed
Base Model(s): Phi 3 Mini 128K Instruct (microsoft/Phi-3-mini-128k-instruct)
Required VRAM: 1.4 GB
Updated: 2025-02-03
Maintainer: PrunaAI
Model Type: phi3
Instruction-Based: Yes
Model Files: 1.4 GB
Quantization Type: 2bit
Model Architecture: Phi3ForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.42.4
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32064
Torch Data Type: bfloat16

Best Alternatives to Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed

Best Alternatives                    Context / RAM     Downloads  Likes
...m 128K Instruct 6.0bpw H6 EXL2    128K / 10.7 GB    9          3
...m 128K Instruct 8.0bpw H8 EXL2    128K / 13.4 GB    4          4
...dium 128K Instruct 8 0bpw EXL2    128K / 13.4 GB    4          1
...m 128K Instruct 3.0bpw H6 EXL2    128K / 5.6 GB     5          0
...m 128K Instruct 5.0bpw H6 EXL2    128K / 8.9 GB     5          0
...28K Instruct Ov Fp16 Int4 Asym    128K / 2.5 GB     5          0
...128K Instruct HQQ 4bit Smashed    128K / 2.3 GB     8          0
Phi 3 Mini 4K Instruct Fp16          4K / GB           508        3
NuExtract Bpw6 EXL2                  4K / 3 GB         4          1
...Mini 4K Geminified 3 0bpw EXL2    4K / 1.6 GB       5          0
Note: a green score (e.g. "73.2") means that the model is better than PrunaAI/microsoft-Phi-3-mini-128k-instruct-HQQ-2bit-smashed.

Rank the Microsoft Phi 3 Mini 128K Instruct HQQ 2bit Smashed Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for other open-source LLMs or SLMs? The directory lists 42463 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227