Mpt 30B Instruct Peft Compatible by eluzhnica

  Arxiv:2108.12409   Arxiv:2205.14135   Autotrain compatible   Composer   Custom code   Instruct   Llm-foundry   Mosaicml   Mpt   Pytorch   Region:us   Sharded

Mpt 30B Instruct Peft Compatible Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mpt 30B Instruct Peft Compatible (eluzhnica/mpt-30b-instruct-peft-compatible)

Mpt 30B Instruct Peft Compatible Parameters and Internals

Model Type: text generation
Additional Notes: The model can produce factually incorrect outputs and may generate lewd, biased, or offensive content.
Training Details:
  Data Sources: Dolly HHRLHF, Competition Math, Duorc, CoT GSM8k, Qasper, Quality, Summ Screen FD, Spider
  Context Length: 8192
  Training Time: 8 hours
  Hardware Used: 72 A100 40GB GPUs
  Model Architecture: Modified decoder-only transformer with FlashAttention and ALiBi
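
The "Custom code" tag above means the model is loaded through transformers with trust_remote_code enabled. The sketch below is a minimal inference example under stated assumptions: the Dolly-style instruction template is taken from the upstream MPT-30B-Instruct card, and the dtype and device mapping should be adapted to your hardware.

```python
# Minimal inference sketch (assumptions: Dolly-style prompt template from the
# upstream MPT-30B-Instruct card; bfloat16 weights; one or more large GPUs).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eluzhnica/mpt-30b-instruct-peft-compatible"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the bfloat16 checkpoint listed below
    trust_remote_code=True,       # the repo ships custom MPT modeling code
    device_map="auto",            # ~60 GB of weights; shard across available GPUs
)

PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

inputs = tokenizer(
    PROMPT.format(instruction="Explain ALiBi position encoding in two sentences."),
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=False)

print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Because the architecture uses ALiBi rather than learned positional embeddings, the context can in principle be extended past the 8192-token training length by raising max_seq_len in the config, though quality beyond the trained length is not guaranteed.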
LLM Name: Mpt 30B Instruct Peft Compatible
Repository: https://huggingface.co/eluzhnica/mpt-30b-instruct-peft-compatible
Model Size: 30b
Required VRAM: 60.1 GB
Updated: 2024-12-22
Maintainer: eluzhnica
Model Type: mpt
Instruction-Based: Yes
Model Files: 9.8 GB (1-of-7), 9.9 GB (2-of-7), 9.9 GB (3-of-7), 9.9 GB (4-of-7), 9.9 GB (5-of-7), 9.9 GB (6-of-7), 0.8 GB (7-of-7)
Model Architecture: MPTForCausalLM
License: cc-by-sa-3.0
Model Max Length: 8192
Transformers Version: 4.28.1
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: bfloat16
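
The point of the repository, reflected in its name, is compatibility with the PEFT library for parameter-efficient fine-tuning of MPT-30B-Instruct. Below is a minimal LoRA sketch; the rank, alpha, dropout, and the "Wqkv" target module (assumed to be MPT's fused attention projection) are illustrative choices, not values from the model card.

```python
# Hypothetical LoRA setup via PEFT; hyperparameters and target module names
# are illustrative assumptions, not values published with this checkpoint.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "eluzhnica/mpt-30b-instruct-peft-compatible",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                        # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["Wqkv"],     # assumed name of MPT's fused QKV projection
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the LoRA adapters are trainable
```

The wrapped model can then be passed to a standard transformers Trainer; only the adapter weights are updated, which is what the PEFT-compatibility changes in this repo are meant to enable.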

Best Alternatives to Mpt 30B Instruct Peft Compatible

Best Alternatives                    Context / RAM    Downloads   Likes
Mpt 30B Instruct                     0K / 60.1 GB     1263        101
Ct2fast Mpt 30B Instruct             0K / 30 GB       9           4
Ct2fast Mpt 30B Chat                 0K / 30 GB       10          2
Mpt 30B Chat Q8                      0K / 30.4 GB     19          1
Mpt 30B Instruct Q8                  0K / 30.4 GB     16          5
...l Mpt 30B Instruct W4 G128 AWQ    0K / 16.1 GB     8           2

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217