Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath by kms7530


Tags: 4bit, Autotrain compatible, Base model:quantized:unsloth/p..., Base model:unsloth/phi-3-mini-..., Conversational, Dataset:kms7530/chemeng, En, Endpoints compatible, Gguf, Instruct, Lora, Mistral, Pytorch, Quantized, Region:us, Safetensors, Sft, Sharded, Tensorflow, Trl, Unsloth

Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath (kms7530/chemeng_phi-3-mini-4k-instruct-bnb-4bit_16_4_100_1_nonmath)

Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath Parameters and Internals

LLM Name: Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath
Repository: https://huggingface.co/kms7530/chemeng_phi-3-mini-4k-instruct-bnb-4bit_16_4_100_1_nonmath
Base Model(s): unsloth/Phi-3-mini-4k-instruct-bnb-4bit
Model Size: 3.8b
Required VRAM: 7.6 GB
Updated: 2025-02-22
Maintainer: kms7530
Instruction-Based: Yes
Model Files: 0.1 GB; 5.0 GB (1-of-2); 2.6 GB (2-of-2); 5.0 GB (1-of-2); 2.6 GB (2-of-2); 2.3 GB; 2.7 GB; 4.1 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|4bit
Model Architecture: AutoModelForCausalLM
License: apache-2.0
Model Max Length: 4096
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <|placeholder6|>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj|q_proj|up_proj|down_proj|v_proj|o_proj|gate_proj
LoRA Alpha: 16
LoRA Dropout: 0
R Param: 16
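
The PEFT fields above map directly onto a peft LoraConfig, and the adapter is meant to be applied on top of the listed base model. Below is a minimal loading sketch using transformers + peft, assuming the bnb-4bit base checkpoint can be loaded without an explicit quantization config; the example prompt, device_map setting, and generation parameters are illustrative assumptions and are not taken from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

base_id = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"  # base model listed above
adapter_id = "kms7530/chemeng_phi-3-mini-4k-instruct-bnb-4bit_16_4_100_1_nonmath"

# LoRA hyperparameters reconstructed from the table above (r=16, alpha=16,
# dropout=0, seven projection targets). Shown for reference only; the adapter
# repository already stores this configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[
        "k_proj", "q_proj", "v_proj", "o_proj",
        "up_proj", "down_proj", "gate_proj",
    ],
    task_type="CAUSAL_LM",
)

# Assumption: the base repo ships as a bitsandbytes 4-bit checkpoint, so no
# explicit BitsAndBytesConfig is passed here.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

# Hypothetical prompt, only to demonstrate generation.
prompt = "Explain the purpose of a distillation column."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```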

Best Alternatives to Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath

Best Alternatives              Context / RAM    Downloads    Likes
Phi 3 Mini Instruct 16bit      0K / 7.6 GB      76           0
Followup V6                    0K / 7.6 GB      0            0

Rank the Chemeng Phi 3 Mini 4K Instruct Bnb 4bit 16 4 100 1 Nonmath Capabilities

Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227