Platypus2 13B GGML by TheBloke


Arxiv: 2307.09288 · Arxiv: 2308.07317 · Base model (finetune): garage-bAInd/Platypus2-13B · Dataset: garage-bAInd/Open-Platypus · En · GGML · Llama · Quantized · Region: us

Platypus2 13B GGML Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Platypus2 13B GGML (TheBloke/Platypus2-13B-GGML)

Platypus2 13B GGML Parameters and Internals

Model Type: auto-regressive language model, instruction fine-tuned
Use Cases
Areas: research, commercial applications
Applications: instruction-based tasks, storytelling
Primary Use Cases: text generation tasks
Limitations: English-language model; may not cover all scenarios
Considerations: Developers should perform safety testing and tuning tailored to their specific applications of the model.
Additional Notes: Quantized versions in different formats provide flexibility in performance and resource requirements.
Supported Languages: English (fluent)
Training Details
Data Sources: garage-bAInd/Open-Platypus
Methodology: fine-tuned using LoRA
Hardware Used: 1× A100 80 GB GPU
Model Architecture: LLaMA2 transformer architecture
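
The methodology above (LoRA fine-tuning of a 13B LLaMA2 base on garage-bAInd/Open-Platypus) can be sketched with Hugging Face PEFT as below; the rank, alpha, target modules, and base checkpoint are illustrative assumptions, not the exact configuration used by garage-bAInd.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face PEFT.
# Hyperparameters and target modules are illustrative assumptions,
# not garage-bAInd's exact Platypus2 configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-2-13b-hf"  # assumed LLaMA2 base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

lora_cfg = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
# Train on garage-bAInd/Open-Platypus with a standard transformers Trainer loop.
```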
Safety Evaluation
Risk Categories: inaccuracy, bias
Ethical Considerations: Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. Potential outputs cannot be predicted in advance.
Responsible AI Considerations
Fairness: Potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses.
Transparency: Refer to the Responsible Use Guide for transparency actions.
Accountability: Developers should perform safety testing and tuning tailored to their specific applications of the model.
Mitigation Strategies: Perform safety testing and tuning before deploying any applications.
Input / Output
Input Format: instruction-response prompts
Accepted Modalities: text
Output Format: textual responses
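
The instruction-response format for Platypus2 models is typically an Alpaca-style template; the exact wording below is an assumption, so verify it against the original model card before relying on it.

```python
# Alpaca-style instruction prompt (assumed template; verify against the model card).
def build_prompt(instruction: str) -> str:
    return (
        "### Instruction:\n\n"
        f"{instruction}\n\n"
        "### Response:\n\n"
    )

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```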
LLM Name: Platypus2 13B GGML
Repository: https://huggingface.co/TheBloke/Platypus2-13B-GGML
Model Name: Platypus2
Model Creator: garage-bAInd
Base Model(s): Platypus2 13B (garage-bAInd/Platypus2-13B)
Model Size: 13b
Required VRAM: 5.5 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Model Files: 5.5 GB, 6.9 GB, 6.3 GB, 5.7 GB, 7.4 GB, 8.2 GB, 7.9 GB, 7.4 GB, 9.0 GB, 9.8 GB, 9.2 GB, 9.0 GB, 10.7 GB, 13.8 GB
Supported Languages: en
GGML Quantization: Yes
Quantization Type: ggml
Model Architecture: AutoModel
License: llama2
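
GGML is the pre-GGUF quantized format for llama.cpp-compatible runtimes, which is why the repository ships multiple file sizes (roughly 5.5 GB to 13.8 GB) at different quantization levels. A minimal loading sketch with the ctransformers library is shown below; the specific model_file name is an assumption, so substitute one of the .bin files actually listed in the repository.

```python
# Minimal GGML inference sketch using ctransformers (pip install ctransformers).
# The model_file name is an assumption; pick a .bin file listed in the repo.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Platypus2-13B-GGML",
    model_file="platypus2-13b.ggmlv3.q4_K_M.bin",  # assumed file name
    model_type="llama",
    gpu_layers=0,  # raise to offload layers to a GPU if one is available
)

prompt = "### Instruction:\n\nName three uses of a quantized 13B model.\n\n### Response:\n\n"
print(llm(prompt, max_new_tokens=128, temperature=0.7))
```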

Best Alternatives to Platypus2 13B GGML

Best Alternatives | Context / RAM | Downloads / Likes
Llama 2 13B Chat GGML | 0K / 5.5 GB | 230695
RuGPT 3.5 13B GGML | 0K / 7.4 GB | 128
RuGPT 3.5 13B GGML | 0K / 7.4 GB | 1215
MythoMax L2 13B GGML | 0K / 5.5 GB | 2182
Llama 2 13B GGML | 0K / 5.5 GB | 27176
CodeLlama 13B GGML | 0K / 5.7 GB | 1433
...rca Platypus WizardLM 13B GGML | 0K / 5.5 GB | 56
...penBuddy Llama2 13B V11.1 GGML | 0K / 5.5 GB | 65
...zardCoder Python 13B V1.0 GGML | 0K / 5.7 GB | 622
Yarn Llama 2 13B 128K GGML | 0K / 5.5 GB | 66


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227