Dolphin 2.9.2 Phi 3 Medium Abliterated by cognitivecomputations



Dolphin 2.9.2 Phi 3 Medium Abliterated Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Dolphin 2.9.2 Phi 3 Medium Abliterated (cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated)

Dolphin 2.9.2 Phi 3 Medium Abliterated Parameters and Internals

Model Type 
text generation, instruction following, conversational, coding
Use Cases 
Areas:
research, software development
Applications:
AI assistants, agentic capabilities, function integration
Primary Use Cases:
instruction following, conversational AI, coding support, function calling integration
Limitations:
the model is uncensored, so deployers must provide their own external alignment/safety layer
Additional Notes 
The model has been abliterated (built-in refusal behavior removed) while minimizing the impact on its other capabilities.
Supported Languages 
English (en)
Training Details 
Data Sources:
cognitivecomputations/Dolphin-2.9.2, teknium/OpenHermes-2.5, m-a-p/CodeFeedback-Filtered-Instruction, cognitivecomputations/dolphin-coder, cognitivecomputations/samantha-data, microsoft/orca-math-word-problems-200k, internlm/Agent-FLAN, cognitivecomputations/SystemChat-2.0
Methodology:
qLoRA fine-tuning
Context Length:
4096
Training Time:
3.5 days
Hardware Used:
8×L40S node provided by Crusoe Cloud
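The card lists qLoRA fine-tuning as the methodology. A minimal sketch of what a qLoRA setup looks like with Hugging Face `transformers` and `peft` — the hyperparameters, target modules, and use of the listed base checkpoint are illustrative assumptions, not the card's actual training recipe:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# qLoRA: load the frozen base model in 4-bit NF4 quantization...
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3-mini-4k-instruct",  # base repo as listed on the card
    quantization_config=bnb,
    device_map="auto",
)

# ...then train only small low-rank adapter matrices on top of it.
lora = LoraConfig(
    r=16,                 # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```

Because only the adapters receive gradients, a 14B model can be tuned on a single multi-GPU node such as the 8×L40S machine mentioned above.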
Responsible Ai Considerations 
Accountability:
You are responsible for any content you create using this model.
Mitigation Strategies:
Dataset filtered to remove alignment and bias; users should implement their own alignment layer when exposing as a service.
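Since the model ships without built-in alignment, a deployer-side guard layer is expected. A minimal sketch of such a layer — the system prompt and the keyword blocklist are placeholder assumptions; a real service should use a dedicated moderation model or API rather than keyword matching:

```python
SAFETY_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests for harmful content."
)

# Placeholder blocklist for illustration only, not a real policy.
BLOCKED_TERMS = {"example_blocked_term"}

def guard_messages(messages):
    """Prepend a safety system message and screen user turns.

    Raises ValueError if a user message matches the blocklist;
    otherwise returns the message list with the safety prompt prepended.
    """
    for m in messages:
        if m["role"] == "user" and any(
            t in m["content"].lower() for t in BLOCKED_TERMS
        ):
            raise ValueError("request rejected by alignment layer")
    return [{"role": "system", "content": SAFETY_SYSTEM_PROMPT}] + list(messages)

guarded = guard_messages([{"role": "user", "content": "Hello!"}])
```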
Input Output 
Input Format:
ChatML prompt template
Accepted Modalities:
text
Output Format:
text responses
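The ChatML template wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of building such a prompt by hand (in practice the tokenizer's `apply_chat_template` method does this for you; the helper name here is my own):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
])
print(prompt)
```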
LLM Name: Dolphin 2.9.2 Phi 3 Medium Abliterated
Repository: 🤗 https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated
Base Model(s): Phi 3 Mini 4K Instruct (unsloth/Phi-3-mini-4k-instruct)
Model Size: 14b
Required VRAM: 28 GB
Updated: 2025-01-15
Maintainer: cognitivecomputations
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-6), 5.0 GB (2-of-6), 4.9 GB (3-of-6), 5.0 GB (4-of-6), 5.0 GB (5-of-6), 3.2 GB (6-of-6)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.40.1
Tokenizer Class: LlamaTokenizer
Padding Token: <|placeholder6|>
Vocabulary Size: 32064
Torch Data Type: bfloat16
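The 28 GB VRAM figure is consistent with the spec sheet: 14B parameters stored in bfloat16 take 2 bytes each, so the weights alone occupy about 28 GB (decimal), before activations and KV cache. A quick back-of-the-envelope check:

```python
def weight_vram_gb(n_params, bytes_per_param=2.0):
    """Approximate VRAM needed for model weights alone, in decimal GB.

    bfloat16 uses 2 bytes per parameter; fp32 would be 4, int4 about 0.5.
    Ignores activations, KV cache, and framework overhead.
    """
    return n_params * bytes_per_param / 1e9

est = weight_vram_gb(14e9)     # 14B parameters in bfloat16
print(f"{est:.0f} GB")         # matches the listed 28 GB requirement
```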

Best Alternatives to Dolphin 2.9.2 Phi 3 Medium Abliterated

| Best Alternatives | Context / RAM | Downloads | Likes |
| --- | --- | --- | --- |
| ...ral Nemo Instruct 14B Merge V1 | 1000K / 24.6 GB | 19 | 0 |
| Phi 3 Medium 4K Instruct | 4K / 28 GB | 2768 | 27 |
| Phi3 Translator Merged3 | 4K / 55 GB | 22 | 0 |
| Phi 3 Medium Llamaish | 4K / 28 GB | 21 | 1 |
| ...ll Phi 3 Medium 4K Inst Philos | 4K / 28 GB | 10 | 0 |
| Phi 3 Unsloth Finetune 7 | 4K / 28 GB | 18 | 0 |
| Phi 3 Unsloth Finetune 5 | 4K / 28 GB | 20 | 0 |
| Phi 3 Unsloth Finetune 6 | 4K / 28 GB | 19 | 0 |
| Phi 3 Unsloth Finetune 4 | 4K / 28 GB | 18 | 0 |
| Phi 3 Unsloth Finetuned 3 | 4K / 28 GB | 19 | 0 |

Rank the Dolphin 2.9.2 Phi 3 Medium Abliterated Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Every contribution makes a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227