Fp4 14B V1 Fix by ehristoforu


Tags: Merged Model · Arxiv:2403.19522 · AutoTrain compatible · Base model: bunnycore/phi-4-rp-... · Base model: mudler/localai-func... · Base model: pinkstack/superthou... · Base model: prithivmlmods/phi-4-... · Base model: prithivmlmods/phi-4-... · Base model: prithivmlmods/phi-4-... · Base model: prithivmlmods/phi-4-... · Base model: unsloth/phi-4 · Conversational · Endpoints compatible · Llama · Region: us · Safetensors · Sharded · Tensorflow

Fp4 14B V1 Fix Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Fp4 14B V1 Fix (ehristoforu/fp4-14b-v1-fix)

Fp4 14B V1 Fix Parameters and Internals

LLM Name: Fp4 14B V1 Fix
Repository 🤗: https://huggingface.co/ehristoforu/fp4-14b-v1-fix
Base Model(s):
  - bunnycore/Phi-4-RP-V0.2
  - unsloth/phi-4
  - prithivMLmods/Phi-4-QwQ
  - prithivMLmods/Phi-4-Empathetic
  - prithivMLmods/Phi-4-o1
  - prithivMLmods/Phi-4-Math-IO
  - mudler/LocalAI-functioncall-phi-4-v0.3
  - Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
Merged Model: Yes
Model Size: 14b
Required VRAM: 29.5 GB
Updated: 2025-04-15
Maintainer: ehristoforu
Model Type: llama
Model Files: 5.0 GB (1-of-6), 5.0 GB (2-of-6), 5.0 GB (3-of-6), 4.9 GB (4-of-6), 5.0 GB (5-of-6), 4.6 GB (6-of-6)
Model Architecture: LlamaForCausalLM
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.46.2
Tokenizer Class: GPT2Tokenizer
Padding Token: <|dummy_87|>
Vocabulary Size: 100352
Torch Data Type: bfloat16
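The figures in the table above can be cross-checked against each other: the six safetensors shards should add up to the stated 29.5 GB, and at 2 bytes per bfloat16 weight that total corresponds to roughly 14.7B parameters, consistent with the listed 14b model size. A minimal sketch of that arithmetic, using the shard sizes from the "Model Files" row:

```python
# Cross-check of the catalog figures: shard sizes (GB) from the
# "Model Files" row; weights are stored as bfloat16 (2 bytes each).
shard_sizes_gb = [5.0, 5.0, 5.0, 4.9, 5.0, 4.6]

total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 29.5 — matches the "Required VRAM" row

# At 2 bytes per bfloat16 parameter, the weight total implies:
approx_params_billions = total_gb / 2
print(approx_params_billions)  # 14.75, i.e. a ~14B-parameter model
```

Note that 29.5 GB covers the weights alone; actual memory use at inference time will be higher once the KV cache for the 16384-token context is allocated.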

Best Alternatives to Fp4 14B V1 Fix

Best Alternatives                    Context / RAM    Downloads  Likes
...Qwen2.5llamaify 14B V23.1 200K    195K / 29.7 GB   1237       1
...Qwen2.5llamaify 14B V23.3 200K    195K / 29.7 GB   11         5
GeM2 Llamion 14B LongChat            195K / 29 GB     1196       4
Openbuddy Zero 14B V22.3 32K         32K / 28 GB      10         1
Rune 14B                             16K / 29.5 GB    45         1
Rosemary 14B                         16K / 29.5 GB    57         0
Sake 20B                             16K / 43.1 GB    22         0
Phi Line 14B                         16K / 29.4 GB    321        4
Phi 4 ReasoningRP                    16K / 29.5 GB    25         2
Loke 14B Sce                         16K / 29.5 GB    0          1
Note: green Score (e.g. "73.2") means that the model is better than ehristoforu/fp4-14b-v1-fix.

Rank the Fp4 14B V1 Fix Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227