Pygmalion 2 13B GGUF by TheBloke


Tags: Base model: pygmalionai/pygmali..., Base model (quantized): pygmalion..., Dataset: databricks/databricks-..., Dataset: jondurbin/airoboros-gp..., Dataset: norquinal/claude multi..., Dataset: open-orca/openorca, Dataset: pygmalionai/pippa, En, Gguf, Instruct, Llama, Quantized, Region: us

Pygmalion 2 13B GGUF Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Pygmalion 2 13B GGUF (TheBloke/Pygmalion-2-13B-GGUF)

Pygmalion 2 13B GGUF Parameters and Internals

Model Type: text generation, instruct
Use Cases
Primary Use Cases: fictional writing for entertainment purposes
Limitations: not fine-tuned to be safe and harmless; may contain profanity and text that is lewd or otherwise offensive, and might produce factually wrong or misleading output
Additional Notes: the model is based on Llama-2 13B and was known as Metharme during its experimental phase.
Training Details
Data Sources: PygmalionAI/PIPPA, Open-Orca/OpenOrca, Norquinal/claude_multiround_chat_30k, jondurbin/airoboros-gpt4-1.4.1, databricks/databricks-dolly-15k
Methodology: supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories, and conversations with synthetically generated instructions
Model Architecture: instruction-tuned Llama-2 biased towards fiction writing and conversation
Input Output
Input Format: prompting uses three tokens: `<|system|>`, `<|user|>`, and `<|model|>` (see the sketch below)
Accepted Modalities: text
Output Format: text
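The prompt format above can be exercised with any GGUF runtime. Below is a minimal sketch using llama-cpp-python; the GGUF filename and the system/user text are illustrative assumptions, not values from this listing.

```python
# Minimal sketch of the <|system|>/<|user|>/<|model|> prompt format,
# assuming llama-cpp-python and a locally downloaded GGUF file.
from llama_cpp import Llama

# Filename is an assumed example; use whichever quantization you downloaded.
llm = Llama(model_path="pygmalion-2-13b.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|system|>Enter RP mode. You are a friendly tavern keeper."  # persona / instructions
    "<|user|>Good evening! What's on the menu tonight?"           # user turn
    "<|model|>"                                                   # model continues from here
)

out = llm(prompt, max_tokens=256, stop=["<|user|>", "<|system|>"])
print(out["choices"][0]["text"])
```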
LLM Name: Pygmalion 2 13B GGUF
Repository 🤗: https://huggingface.co/TheBloke/Pygmalion-2-13B-GGUF
Model Name: Pygmalion 2 13B
Model Creator: PygmalionAI
Base Model(s): Pygmalion 2 13B (PygmalionAI/pygmalion-2-13b)
Model Size: 13b
Required VRAM: 5.4 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 5.4 GB, 6.9 GB, 6.3 GB, 5.7 GB, 7.4 GB, 7.9 GB, 7.4 GB, 9.0 GB, 9.2 GB, 9.0 GB, 10.7 GB, 13.8 GB (see the download sketch after this list)
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
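Any of the quantized files listed above can be pulled directly from the repository with huggingface_hub. This is a sketch only; the filename follows TheBloke's usual naming convention but is an assumption, so verify it against the repository's file list.

```python
# Sketch: download a single GGUF quantization from the repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Pygmalion-2-13B-GGUF",
    filename="pygmalion-2-13b.Q4_K_M.gguf",  # assumed filename; check the repo file list
)
print(path)  # local cache path of the downloaded file
```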

Best Alternatives to Pygmalion 2 13B GGUF

Best Alternatives | Context / RAM | Downloads / Likes
Llama 3 13B Instruct V0.1 GGUF | 0K / 5.1 GB | 4705
...aMa 3 Instruct Zeroed 13B GGUF | 0K / 5 GB | 111
Codellama 7B Instruct GGUF | 0K / 2.8 GB | 1000
Codellama 13B Instruct GGUF | 0K / 13.8 GB | 50
Medicine LLM 13B GGUF | 0K / 5.4 GB | 196515
Finance LLM 13B GGUF | 0K / 4.8 GB | 62317
CodeLlama 13B Instruct GGUF | 0K / 5.4 GB | 5127119
Law LLM 13B GGUF | 0K / 5.4 GB | 3577
Swallow 13B Instruct GGUF | 0K / 5.5 GB | 2445
Mythalion 13B GGUF | 0K / 5.4 GB | 144165
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/Pygmalion-2-13B-GGUF.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227