Dolphin 2.9 Llama3 8B 1M by cognitivecomputations



Dolphin 2.9 Llama3 8B 1M Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Dolphin 2.9 Llama3 8B 1M (cognitivecomputations/dolphin-2.9-llama3-8b-1m)

Dolphin 2.9 Llama3 8B 1M Parameters and Internals

Model Type 
text generation
Additional Notes 
Dolphin is uncensored: the dataset was filtered to remove alignment and bias, which makes the model highly compliant with any request, even unethical ones. Users should implement their own alignment layer before exposing the model as a service.
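As an illustration of such an alignment layer, the sketch below prepends a fixed guardrail system prompt to every conversation before it reaches the model. The prompt wording and helper name are hypothetical and not part of the model card.

```python
# Hypothetical serving-side "alignment layer": prepend a fixed guardrail
# system message to every conversation. The prompt text below is an example,
# not something shipped with the model.
GUARDRAIL_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that are illegal, harmful, "
    "or against the service's usage policy, and briefly explain why."
)

def build_messages(user_message: str) -> list[dict]:
    """Wrap a raw user message with the guardrail system prompt."""
    return [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```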
Training Details 
Data Sources: GPT4, cognitivecomputations/Dolphin-2.9, teknium/OpenHermes-2.5, m-a-p/CodeFeedback-Filtered-Instruction, cognitivecomputations/dolphin-coder, cognitivecomputations/samantha-data, HuggingFaceH4/ultrachat_200k, microsoft/orca-math-word-problems-200k, abacusai/SystemChat-1.1, Locutusque/function-calling-chatml, internlm/Agent-FLAN
Context Length: 1,000,000 tokens
Training Time: 2.5 days
Hardware Used: 8x L40S GPUs
Input Output 
Input Format: ChatML prompt template
Accepted Modalities: text
Output Format: assistant response
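For reference, here is a minimal sketch of building a ChatML prompt with the Hugging Face transformers tokenizer, assuming the repository's tokenizer ships a ChatML chat template as the input format above indicates.

```python
# Minimal ChatML prompt sketch (assumes the repo's tokenizer defines a
# ChatML chat template, per the "Input Format" above).
from transformers import AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9-llama3-8b-1m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Summarize the Llama 3 architecture in one sentence."},
]

# Renders <|im_start|>role ... <|im_end|> blocks and appends the assistant
# header so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```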
LLM Name: Dolphin 2.9 Llama3 8B 1M
Repository (🤗): https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b-1m
Base Model(s): Meta Llama 3 8B (meta-llama/Meta-Llama-3-8B)
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-03-22
Maintainer: cognitivecomputations
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
License: other
Context Length: 1048576
Model Max Length: 1048576
Transformers Version: 4.40.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128258
Torch Data Type: float16
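Based on the specifications above (float16 weights, ~16.1 GB of VRAM, LlamaForCausalLM, transformers 4.40.1), a loading and generation sketch might look like the following. This is illustrative rather than an official recipe, and device_map="auto" assumes the accelerate package is installed.

```python
# Illustrative loading sketch derived from the spec table above; not an
# official recipe from the maintainer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9-llama3-8b-1m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 checkpoint (~16.1 GB)
    device_map="auto",          # requires `accelerate`; places shards on available devices
)

inputs = tokenizer("Hello, Dolphin!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```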

Best Alternatives to Dolphin 2.9 Llama3 8B 1M

Best Alternatives                  | Context / RAM   | Downloads | Likes
...a 3 8B Instruct Gradient 1048K  | 1024K / 16.1 GB | 5116      | 682
A18                                | 1024K / 16.1 GB | 272       | 0
A12                                | 1024K / 16.1 GB | 256       | 0
B5                                 | 1024K / 16.1 GB | 147       | 0
C31                                | 1024K / 16.1 GB | 183       | 0
A15                                | 1024K / 16.1 GB | 160       | 0
A5                                 | 1024K / 16.1 GB | 150       | 0
A13                                | 1024K / 16.1 GB | 236       | 0
C35                                | 1024K / 16.1 GB | 236       | 0
A8                                 | 1024K / 16.1 GB | 172       | 0
Note: a green score (e.g. "73.2") means that the model is better than cognitivecomputations/dolphin-2.9-llama3-8b-1m.

Rank the Dolphin 2.9 Llama3 8B 1M Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227