Phi4.Turn.R1Distill V1.5.1 Tensors by Quazim0t0


Tags: 4bit, Autotrain compatible, Base model: finetune: unsloth/ph..., Base model: unsloth/phi-4-unslo..., Conversational, Dataset: bespokelabs/bespoke-st..., Dataset: bespokelabs/bespoke-st..., Dataset: novasky-ai/sky-t1 data..., Dataset: open-thoughts/openthou..., Dataset: quazim0t0/benfordslawr..., En, Endpoints compatible, Gguf, Llama, Model-index, Quantized, Region:us, Safetensors, Sharded, Tensorflow, Unsloth

Phi4.Turn.R1Distill V1.5.1 Tensors Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Phi4.Turn.R1Distill V1.5.1 Tensors (Quazim0t0/Phi4.Turn.R1Distill_v1.5.1-Tensors)

Phi4.Turn.R1Distill V1.5.1 Tensors Parameters and Internals

LLM Name: Phi4.Turn.R1Distill V1.5.1 Tensors
Repository 🤗: https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.5.1-Tensors
Base Model(s): Phi 4 Unsloth Bnb 4bit (unsloth/phi-4-unsloth-bnb-4bit)
Model Size: 14.7b
Required VRAM: 29.4 GB
Updated: 2025-02-09
Maintainer: Quazim0t0
Model Type: llama
Model Files: 4.9 GB (1-of-6), 5.0 GB (2-of-6), 4.9 GB (3-of-6), 5.0 GB (4-of-6), 5.0 GB (5-of-6), 4.6 GB (6-of-6)
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|4bit
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.47.1
Tokenizer Class: GPT2Tokenizer
Padding Token: <|dummy_87|>
Vocabulary Size: 100352
Torch Data Type: bfloat16
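
For reference, the fields above correspond to a standard Hugging Face transformers load. The snippet below is a minimal sketch rather than an official usage example: it assumes the transformers (4.47.1 or later), torch, and accelerate packages are installed, that the ~29.4 GB of bfloat16 weights fit on the available hardware, and that the example prompt is purely illustrative.

# Minimal loading sketch for Quazim0t0/Phi4.Turn.R1Distill_v1.5.1-Tensors (illustrative, not from the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Quazim0t0/Phi4.Turn.R1Distill_v1.5.1-Tensors"

# GPT2Tokenizer class, vocabulary size 100352, pad token <|dummy_87|>.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# LlamaForCausalLM with bfloat16 weights sharded over six safetensors files.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # matches "Torch Data Type: bfloat16"
    device_map="auto",            # spread the ~29.4 GB of weights across available devices
)

# The context window (Context Length / Model Max Length) is 16384 tokens.
prompt = "Summarize the idea behind R1-style reasoning distillation in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))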

Best Alternatives to Phi4.Turn.R1Distill V1.5.1 Tensors

Best Alternatives | Context / RAM | Downloads / Likes
ThinkPhi1.1 Tensors | 16K / 29.4 GB | 2022
Unaligned Thinker PHI 4 | 16K / 29.4 GB | 2331
Phi 4 COT | 16K / 29.4 GB | 500
...i4.Turn.R1Distill V1.0 Tensors | 16K / 29.4 GB | 1651
Phi 4 Llama T1 Full | 16K / 29.4 GB | 291
Phi 4 | 16K / 29.4 GB | 2424969
Luminis Phi 4 | 16K / 29.5 GB | 356
Phi 4 Model Stock V4 | 16K / 29.5 GB | 3337
Orca Mini Phi 4 | 16K / 29.4 GB | 4288
Phi 4 RR Shoup | 16K / 29.5 GB | 921

Rank the Phi4.Turn.R1Distill V1.5.1 Tensors Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227