Mpt 7B Instruct Peft Compatible by eluzhnica


  Arxiv:2010.04245   Arxiv:2108.12409   Arxiv:2205.14135   Autotrain compatible   Composer   Custom code   Dataset:mosaicml/dolly_hhrlhf   Instruct   Llm-foundry   Mosaicml   Mpt   Pytorch   Region:us   Sharded

Mpt 7B Instruct Peft Compatible Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mpt 7B Instruct Peft Compatible (eluzhnica/mpt-7b-instruct-peft-compatible)

Mpt 7B Instruct Peft Compatible Parameters and Internals

Model Type: text generation
Use Cases (Limitations): can produce factually incorrect output; may generate lewd, biased, or otherwise offensive outputs.
Additional Notes: inspired by implementations from other repositories for compatibility with PEFT (tested with QLoRA); not finetuned further from the original weights.
Training Details:
Data Sources: sam-mosaic/dolly_hhrlhf (Databricks Dolly-15k and Anthropic Helpful and Harmless, HH-RLHF)
Methodology: fine-tuning
Context Length: 2048
Training Time: 2.3 hours
Hardware Used: 8x A100-40GB GPUs
Model Architecture: modified decoder-only transformer with FlashAttention and ALiBi
LLM Name: Mpt 7B Instruct Peft Compatible
Repository 🤗: https://huggingface.co/eluzhnica/mpt-7b-instruct-peft-compatible
Model Size: 7b
Required VRAM: 13.3 GB
Updated: 2024-12-21
Maintainer: eluzhnica
Model Type: mpt
Instruction-Based: Yes
Model Files: 9.9 GB (part 1 of 2), 3.4 GB (part 2 of 2)
Model Architecture: MPTForCausalLM
License: cc-by-sa-3.0
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: bfloat16
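Because the model was tuned on dolly_hhrlhf, prompts follow the Dolly-style instruction template. The exact wording below reproduces the upstream mosaicml/mpt-7b-instruct convention and should be treated as an assumption to check against the repository's own generation examples.

```python
# Dolly-style prompt template used by mpt-7b-instruct-derived models
# (wording assumed from the upstream convention).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

def format_prompt(instruction: str) -> str:
    """Wrap a bare instruction in the template the model was tuned on."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

# Loading itself needs trust_remote_code=True because MPT ships custom
# modeling code, and bfloat16 matches the stored weights:
# import torch
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("eluzhnica/mpt-7b-instruct-peft-compatible")
# model = AutoModelForCausalLM.from_pretrained(
#     "eluzhnica/mpt-7b-instruct-peft-compatible",
#     torch_dtype=torch.bfloat16,
#     trust_remote_code=True,
# )
```

Keeping the template identical to what the model saw during fine-tuning generally matters more for output quality than sampling settings.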

Best Alternatives to Mpt 7B Instruct Peft Compatible

Best Alternatives                        Context / RAM    Downloads  Likes
Mpt 7B Chat                              0K / 13.3 GB     18663      512
Mpt 7B Instruct                          0K / 13.3 GB     8063       468
Mpt 7B Int8 Ov                           0K / 0 GB        10         0
Sea Lion 7B Instruct                     0K / 15 GB       531        23
Mpt 7B 8K Instruct                       0K / 13.3 GB     1321       26
Sea Lion 7B Instruct Research            0K / 15 GB       40         14
Results                                  0K / 13.3 GB     16         0
Mpt 7B 8K Chat Sharded Bf16              0K / 13.4 GB     11         1
...7B 8K Instruct Peft Compatible        0K / 13.3 GB     18         1
Vigogne Mpt 7B Instruct                  0K / 13.4 GB     17         0
Note: a green score (e.g. "73.2") means the model outperforms eluzhnica/mpt-7b-instruct-peft-compatible.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217