Mpt 7B 8K Instruct by mosaicml


  Arxiv:2010.04245   Arxiv:2108.12409   Arxiv:2205.14135   Autotrain compatible   Composer   Custom code   Dataset:competition_math   Dataset:duorc   Dataset:emozilla/quality   Dataset:knkarthick/dialogsum   Dataset:mosaicml/dolly_hhrlhf   Dataset:scrolls/summ_screen_fd   Dataset:spider   Ext 8k   Instruct   Llm-foundry   Mosaicml   Mpt   Pytorch   Region:us   Sharded

Mpt 7B 8K Instruct Benchmarks

Mpt 7B 8K Instruct (mosaicml/mpt-7b-8k-instruct)

Mpt 7B 8K Instruct Parameters and Internals

Model Type 
Long-form instruction following, Question-answering, Summarization
Use Cases 
Areas:
Research, Commercial applications
Limitations:
Can produce factually incorrect output; may generate lewd, biased, or offensive outputs
Considerations:
This model should not be relied on to produce factually accurate information.
Additional Notes 
Prone to producing factually incorrect and potentially harmful outputs.
Training Details 
Data Sources:
competition_math, duorc, cot_gsm8k, qasper, quality, summ_screen_fd, spider, mosaicml/dolly_hhrlhf, knkarthick/dialogsum
Data Volume:
Approximately 62.65M tokens in total
Context Length:
2048
Training Time:
About 6.3 hours on 8×80GB A100 GPUs
Hardware Used:
8 x 80GB A100 GPUs
Model Architecture:
Modified from a standard decoder-only transformer with FlashAttention, ALiBi, and no positional embeddings or biases
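
The ALiBi mechanism referenced above replaces positional embeddings with a fixed, head-specific linear penalty on query-key distance added to the attention logits, which is what lets the context window stretch to 8k and beyond. The following is a minimal illustrative sketch, not MosaicML's implementation; the head count and sequence length are arbitrary assumptions for demonstration.

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Minimal ALiBi sketch (arXiv:2108.12409): a static, per-head
    linear penalty added to pre-softmax attention logits."""
    # Geometric slope sequence 2^(-8/n), 2^(-16/n), ... per the paper.
    slopes = torch.tensor(
        [2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)]
    )
    pos = torch.arange(seq_len)
    # distances[i, j] = j - i: negative for past keys, zero on the diagonal;
    # future positions are clamped to zero (they get causal-masked anyway).
    distances = (pos[None, :] - pos[:, None]).clamp(max=0)
    # Shape (n_heads, seq_len, seq_len); add to attention logits before softmax.
    return slopes[:, None, None] * distances[None, :, :]

# Illustrative values only: 4 heads over 6 positions.
print(alibi_bias(n_heads=4, seq_len=6)[0])
```

Because the penalty is a function of distance rather than a learned table, the same weights can attend over sequences longer than those seen in training, typically with gradual rather than abrupt quality loss.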
Input Output 
Input Format:
Text prompts (see the usage sketch after this section)
Accepted Modalities:
Text
Output Format:
Text
Performance Tips:
Use the torch.autocast context manager when running Torch modules in lower precision.
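
The snippet below sketches end-to-end usage under the tip above: loading the checkpoint in bfloat16, formatting an instruction prompt, and generating inside a torch.autocast context. The Dolly-style template mirrors the mosaicml/dolly_hhrlhf finetuning data, but treat its exact wording and the generation settings as assumptions to be checked against the model card.

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-8k-instruct'

# MPT ships custom modeling code, so trust_remote_code=True is required.
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's stored dtype
    trust_remote_code=True,
).to('cuda')

# The repo uses the GPT-NeoX tokenizer (vocabulary size 50432).
tokenizer = transformers.AutoTokenizer.from_pretrained(name)

# Dolly-style instruction template (assumed; verify against the model card).
PROMPT = (
    'Below is an instruction that describes a task. '
    'Write a response that appropriately completes the request.\n'
    '### Instruction:\n{instruction}\n### Response:\n'
)

inputs = tokenizer(
    PROMPT.format(instruction='Summarize ALiBi in one sentence.'),
    return_tensors='pt',
).to('cuda')

# Run lower-precision modules under torch.autocast, per the tip above.
with torch.autocast('cuda', dtype=torch.bfloat16):
    out = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading in bfloat16 halves memory relative to float32, which is consistent with the 13.3 GB VRAM figure listed below for a 7B-parameter model.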
LLM Name: Mpt 7B 8K Instruct
Repository: 🤗 https://huggingface.co/mosaicml/mpt-7b-8k-instruct
Model Size: 7b
Required VRAM: 13.3 GB
Updated: 2025-02-22
Maintainer: mosaicml
Model Type: mpt
Instruction-Based: Yes
Model Files: 9.9 GB (1-of-2), 3.4 GB (2-of-2)
Context Length: 8k
Model Architecture: MPTForCausalLM
License: apache-2.0
Model Max Length: 8192
Transformers Version: 4.30.2
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: bfloat16
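
Because ALiBi uses no learned positional embeddings, the 8192-token maximum above can be raised at load time. The sketch below follows the config pattern documented in MosaicML's MPT model cards; the 16384 value and the optional Triton FlashAttention switch are illustrative assumptions, not recommendations.

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-8k-instruct'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # illustrative: extrapolate past the trained 8192
config.attn_config['attn_impl'] = 'triton'  # optional FlashAttention kernel (needs triton + GPU)

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```

Contexts beyond the trained length generally degrade gradually with ALiBi rather than failing outright, but quality at 2× extrapolation should be validated on your own task.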

Best Alternatives to Mpt 7B 8K Instruct

Best Alternatives | Context / RAM | Downloads | Likes
Mpt 7B Chat | 0K / 13.3 GB | 95245 | 512
Mpt 7B Instruct | 0K / 13.3 GB | 15434 | 469
Mpt 7B Int8 Ov | 0K / 0 GB | 27 | 0
Sea Lion 7B Instruct | 0K / 15 GB | 357 | 23
Sea Lion 7B Instruct Research | 0K / 15 GB | 56 | 14
Results | 0K / 13.3 GB | 6 | 0
...7B 8K Instruct Peft Compatible | 0K / 13.3 GB | 17 | 1
Mpt 7B 8K Chat Sharded Bf16 | 0K / 13.4 GB | 7 | 1
Vigogne Mpt 7B Instruct | 0K / 13.4 GB | 8 | 0
...pt 7B Instruct Peft Compatible | 0K / 13.3 GB | 43 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227