Dolphin 2.9.1 Phi 3 Kensho 4.5B by cognitivecomputations




Dolphin 2.9.1 Phi 3 Kensho 4.5B Parameters and Internals

Model Type 
instruction, conversational, coding, function calling
Use Cases 
Areas:
research, commercial applications
Applications:
instruction-following, conversational agents, coding assistants, function calling
Primary Use Cases:
chatbots, programming, AI assistance
Limitations:
Uncensored model; implement your own alignment layer and ensure responsible use.
Considerations:
Implement your own mitigation strategies.
Additional Notes 
Uses PEFT layer replication at inference time to increase the effective parameter count; the adapter-based approach keeps VRAM usage low. Based on Unsloth's Mistralfied Phi-3-mini-4k-instruct. Thanks to Crusoe Cloud for hardware support.
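As a hedged illustration of the layer-replication technique (not the authors' exact configuration), PEFT ≥ 0.10 exposes a `layer_replication` option on `LoraConfig` that stacks slices of the base model's decoder layers while sharing their weights:

```python
# Sketch of PEFT layer replication; the slice ranges below are illustrative,
# not the recipe used for Kensho. Requires peft >= 0.10.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("unsloth/Phi-3-mini-4k-instruct")

config = LoraConfig(
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Stack two overlapping slices of the 32 decoder layers into a deeper
    # stack. The copies share base weights (little extra VRAM) but receive
    # independent LoRA adapters, growing the effective parameter count.
    layer_replication=[(0, 24), (8, 32)],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```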
Supported Languages 
en (fluent)
Training Details 
Data Sources:
cognitivecomputations/Dolphin-2.9, teknium/OpenHermes-2.5, m-a-p/CodeFeedback-Filtered-Instruction, cognitivecomputations/dolphin-coder, cognitivecomputations/samantha-data, microsoft/orca-math-word-problems-200k, Locutusque/function-calling-chatml, internlm/Agent-FLAN
Methodology:
qLoRA fine-tuning with 4k sequence length
Context Length:
4096 (4k)
Training Time:
2.5 days on 8xL40S node
Hardware Used:
8xL40S node
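A minimal sketch of the qLoRA setup named above, assuming bitsandbytes 4-bit quantization via transformers; all hyperparameters here are placeholders rather than the values used to train Kensho:

```python
# Hedged qLoRA sketch: 4-bit (NF4) base weights plus LoRA adapters, trained
# at a 4096-token sequence length. Hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3-mini-4k-instruct", quantization_config=bnb
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model, LoraConfig(r=32, lora_alpha=16, target_modules="all-linear")
)
# Run your SFT loop of choice, truncating or packing samples to 4096 tokens.
```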
Responsible AI Considerations 
Fairness:
Dataset was filtered to remove alignment and bias.
Transparency:
Read Eric Hartford's blog post about uncensored models (https://erichartford.com/uncensored-models).
Accountability:
Users are responsible for the content they create.
Mitigation Strategies:
Implement your own alignment layer before exposing the model as a service.
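One illustrative way to satisfy that requirement (a toy sketch only; `moderate` is a hypothetical stand-in for whatever policy classifier or rules engine you actually deploy):

```python
# Toy "alignment layer" wrapper to place in front of an uncensored model
# before serving it. `moderate` is a hypothetical placeholder policy check.
def moderate(text: str) -> bool:
    """Return True if `text` passes your content policy (stub)."""
    banned = ("example banned phrase",)  # placeholder policy
    return not any(phrase in text.lower() for phrase in banned)

def guarded_generate(generate, prompt: str) -> str:
    # Screen both the request and the model's reply before returning anything.
    if not moderate(prompt):
        return "Request declined by policy."
    reply = generate(prompt)
    return reply if moderate(reply) else "Response withheld by policy."
```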
Input Output 
Input Format:
ChatML prompt template
Accepted Modalities:
text
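For reference, the ChatML format named above looks like this (a generic ChatML sketch, not copied from the repo's tokenizer config):

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```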
LLM Name: Dolphin 2.9.1 Phi 3 Kensho 4.5B
Repository 🤗: https://huggingface.co/cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B
Base Model(s): Phi 3 Mini 4K Instruct (unsloth/Phi-3-mini-4k-instruct)
Model Size: 4.5B
Required VRAM: 7.6 GB
Updated: 2025-03-14
Maintainer: cognitivecomputations
Model Type: mistral
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-2), 2.6 GB (2-of-2)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.40.2
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32064
Torch Data Type: bfloat16
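Putting the specs above together, a hedged usage sketch (bfloat16 weights, hence roughly the 7.6 GB of VRAM listed; assumes the repo ships a ChatML chat template):

```python
# Hedged usage sketch matching the spec list above: bfloat16 weights
# (~7.6 GB), 4096-token context window.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Write a haiku about dolphins."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```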

Quantized Models of the Dolphin 2.9.1 Phi 3 Kensho 4.5B

Model                              | Likes | Downloads | VRAM
...3 Kensho 4.5B AWQ 4bit Smashed  | 0     | 7         | 3 GB
...3 Kensho 4.5B Bnb 4bit Smashed  | 0     | 22        | 4 GB

Best Alternatives to Dolphin 2.9.1 Phi 3 Kensho 4.5B

Best Alternatives                  | Context / RAM | Downloads | Likes
...i 3 Kensho 4.5B Abliterated V3  | 4K / 13.1 GB  | 34        | 10
...in 2.9.1 Phi 3 Kensho 4.5B AWQ  | 4K / 3.7 GB   | 11        | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227