Dolphin 2.9.1 Phi 3 Kensho 4.5B AWQ by vaclavkosar


Tags: 4-bit, AWQ, autotrain compatible, conversational, instruct, quantized, safetensors, endpoints compatible, mistral, en, region: us
Base model (quantized): unsloth/Phi-3-mini-4k-instruct
Datasets: cognitivecomputations/Dolphin-2.9, cognitivecomputations/dolphin-coder, cognitivecomputations/samantha-data, internlm/Agent-FLAN, Locutusque/function-calling-chatml, m-a-p/CodeFeedback-Filtered-Instruction, microsoft/orca-math-word-problems-200k, teknium/OpenHermes-2.5

Dolphin 2.9.1 Phi 3 Kensho 4.5B AWQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Dolphin 2.9.1 Phi 3 Kensho 4.5B AWQ (vaclavkosar/Dolphin-2.9.1-Phi-3-Kensho-4.5B-AWQ)

Dolphin 2.9.1 Phi 3 Kensho 4.5B AWQ Parameters and Internals

Model Type: Instruction, Conversational, Coding
Additional Notes: The dataset was filtered to remove alignment and bias, making the model more compliant. An additional alignment layer is advised before deploying it as a service.
Supported Languages: en (English)
Training Details:
Data Sources: cognitivecomputations/Dolphin-2.9, teknium/OpenHermes-2.5, m-a-p/CodeFeedback-Filtered-Instruction, cognitivecomputations/dolphin-coder, cognitivecomputations/samantha-data, microsoft/orca-math-word-problems-200k, Locutusque/function-calling-chatml, internlm/Agent-FLAN
Methodology: The model uses PEFT layer replication at inference time to duplicate layers and increase the parameter count (a hedged sketch follows this section). qLoRA fine-tuning was performed with a 4k sequence length.
Context Length: 4000
Training Time: 2.5 days
Hardware Used: 8x L40S node provided by Crusoe Cloud
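To illustrate the layer-replication idea: PEFT's LoraConfig accepts a layer_replication argument that stacks (possibly overlapping) ranges of decoder layers into a deeper model; replicated layers share the frozen base weights while each copy trains its own LoRA adapter. The ranges, rank, and target modules below are illustrative assumptions, not the exact Kensho recipe:

```python
# Hedged sketch: growing unsloth/Phi-3-mini-4k-instruct (~3.8B params)
# toward ~4.5B via PEFT layer replication. Ranges and hyperparameters
# are illustrative, not the exact configuration used for Kensho.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Phi-3-mini-4k-instruct")

config = LoraConfig(
    r=32,
    lora_alpha=64,
    # The unsloth base is converted to the Mistral architecture, so the
    # attention projections use the q/k/v/o naming.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Stack layers 0-23 followed by a second copy of layers 16-31:
    # 40 decoder layers total instead of the base 32. Replicated layers
    # share frozen base weights; each copy gets its own LoRA adapter.
    layer_replication=[(0, 24), (16, 32)],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```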
Input Output:
Input Format: ChatML prompt template format (example below)
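A minimal ChatML-format prompt as used by the Dolphin family; the system and user messages here are just examples:

```python
# Minimal ChatML prompt. The system and user messages are illustrative;
# only the <|im_start|>/<|im_end|> structure is fixed by the format.
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python one-liner that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```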
LLM Name: Dolphin 2.9.1 Phi 3 Kensho 4.5B AWQ
Repository 🤗: https://huggingface.co/vaclavkosar/Dolphin-2.9.1-Phi-3-Kensho-4.5B-AWQ
Base Model(s): Phi 3 Mini 4K Instruct (unsloth/Phi-3-mini-4k-instruct)
Model Size: 4.5B
Required VRAM: 3.7 GB
Updated: 2025-02-12
Maintainer: vaclavkosar
Model Type: mistral
Instruction-Based: Yes
Model Files: 3.7 GB
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MistralForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.41.0
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32064
Torch Data Type: float16
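Given the specs above (AWQ 4-bit, float16, ~3.7 GB of weights), a checkpoint like this can typically be loaded directly through transformers with the autoawq package installed. A minimal sketch; the prompt and generation settings are assumptions:

```python
# Sketch: loading and prompting the AWQ checkpoint via transformers.
# Requires `pip install autoawq`; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "vaclavkosar/Dolphin-2.9.1-Phi-3-Kensho-4.5B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's torch data type
    device_map="auto",
)

# Assumes the tokenizer ships a ChatML chat template, as Dolphin models
# typically do.
messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```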

Best Alternatives to Dolphin 2.9.1 Phi 3 Kensho 4.5B AWQ

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...olphin 2.9.1 Phi 3 Kensho 4.5B | 4K / 7.6 GB | 116 | 31 |
| ...i 3 Kensho 4.5B Abliterated V3 | 4K / 13.1 GB | 27 | 10 |

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227