10K Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit by FlofloB


Tags: Merged Model, 4bit, Autotrain compatible, Conversational, Dataset: uncovai/fineweb cc-mai..., En, Endpoints compatible, Instruct, Mistral, Phi-3, Phi3, Pytorch, Quantized, Region: us, Sft, Sharded, Trl, Unsloth

10K Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
10K Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit (FlofloB/10k_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit)

10K Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit Parameters and Internals

LLM Name: 10k Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit
Repository: 🤗 https://huggingface.co/FlofloB/10k_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit
Base Model(s): unsloth/phi-3-mini-4k-instruct-bnb-4bit
Merged Model: Yes
Required VRAM: 7.6 GB
Updated: 2025-03-12
Maintainer: FlofloB
Model Type: mistral
Instruction-Based: Yes
Model Files: 5.0 GB (shard 1 of 2), 2.6 GB (shard 2 of 2)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.46.2
Tokenizer Class: LlamaTokenizer
Padding Token: <|placeholder6|>
Vocabulary Size: 32064
Torch Data Type: float16
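
The fields above are enough to load the merged checkpoint with plain Transformers. The sketch below is not part of the original model card; it assumes a standard transformers (>= 4.46) + PyTorch setup and only reuses the repository id, tokenizer class, float16 dtype, and 4096-token context listed in the table. The prompt text is purely illustrative.

```python
# Minimal loading sketch (assumption: standard Hugging Face transformers usage,
# not an official example from the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "FlofloB/10k_continued_pretraining_Phi-3-mini-4k-instruct_Unsloth_merged_16bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # resolves to LlamaTokenizer per the table
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # merged 16-bit weights, roughly 7.6 GB of VRAM
    device_map="auto",
)

# The model is instruction-tuned, so use the chat template rather than raw text.
messages = [{"role": "user", "content": "Explain continued pretraining in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)  # keep total length under the 4096-token context
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If VRAM is tight, the 4-bit base model listed above (unsloth/phi-3-mini-4k-instruct-bnb-4bit) is the lighter alternative to this merged 16-bit checkpoint.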

Best Alternatives to 10K Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit

Best Alternatives | Context / RAM | Downloads | Likes
...eZephir Sft Instruct Ead 16bit | 32K / 14.4 GB | 56 | 0
... Mini 4K Instruct Bnb 4bit Ita | 4K / 7.6 GB | 2769 | 0
...ini New Model With Lora Merged | 4K / 7.6 GB | 68 | 0
Phi3 History V2 | 4K / 7.6 GB | 91 | 0
Phired | 4K / 7.6 GB | 12 | 0
Phi 3 Mini Hospital Topic 50 | 4K / 7.6 GB | 19 | 0
MainPHI3 | 4K / 7.6 GB | 11 | 0
Model | 4K / 7.6 GB | 5 | 0
Model | 4K / 7.6 GB | 10 | 0
Phi3 Finetune Test | 4K / 7.6 GB | 8 | 0

Rank the 10K Continued Pretraining Phi 3 Mini 4K Instruct Unsloth Merged 16bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 44,887 models are listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227