Mpt 7B Instruct Q8 by Abzu


Tags: Arxiv:2010.04245, Arxiv:2108.12409, Arxiv:2205.14135, 8-bit, Autotrain compatible, Composer, Custom code, Dataset:mosaicml/dolly_hhrlhf, Instruct, Llm-foundry, Mosaicml, Mpt, Q8, Quantized, Region:us, Safetensors
Model Card on HF 🤗: https://huggingface.co/Abzu/mpt-7b-instruct-q8

Mpt 7B Instruct Q8 Benchmarks

Mpt 7B Instruct Q8 (Abzu/mpt-7b-instruct-q8)

Mpt 7B Instruct Q8 Parameters and Internals

Model Type: Instruction following

Use Cases
- Primary Use Cases: Short-form instruction following
- Limitations: Can produce factually incorrect, lewd, biased, or offensive outputs.
- Considerations: The model should not be relied on to produce factually accurate information.

Training Details
- Data Sources: Databricks Dolly-15k, Anthropic Helpful and Harmless (HH-RLHF)
- Context Length: 2048
- Training Time: 2.3 hours
- Hardware Used: 8x A100-40GB GPUs
- Model Architecture: Modified decoder-only transformer

Input / Output
- Input Format: Dolly-15k format
- Accepted Modalities: text
- Output Format: text
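Since the card says inputs follow the Dolly-15k format, a short sketch of that prompt template may help. The exact wording below is taken from the upstream mosaicml/mpt-7b-instruct card's documented instruction/response framing; treat it as an assumption and check the model card before relying on it.

```python
# Sketch of the Dolly-15k-style prompt template documented on the
# upstream mosaicml/mpt-7b-instruct card (wording assumed from that card).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a plain instruction in the format the model was fine-tuned on."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Summarize the benefits of 8-bit quantization.")
print(prompt)
```

The model then generates its answer after the `### Response:` marker, so generation is typically stopped at the next `### Instruction:` token sequence.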
LLM Name: Mpt 7B Instruct Q8
Repository 🤗: https://huggingface.co/Abzu/mpt-7b-instruct-q8
Base Model(s): Mpt 7B Instruct (mosaicml/mpt-7b-instruct)
Model Size: 7b
Required VRAM: 6.9 GB
Updated: 2024-12-21
Maintainer: Abzu
Model Type: mpt
Instruction-Based: Yes
Model Files: 6.9 GB
Quantization Type: q8
Model Architecture: MPTForCausalLM
License: cc-by-sa-3.0
Model Max Length: 2048
Transformers Version: 4.30.2
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: bfloat16
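The listed file sizes follow directly from parameter count times bytes per parameter. A back-of-envelope check, assuming MPT-7B's roughly 6.7 billion parameters (the "7b" label rounds up; the exact count is an assumption here):

```python
# Rough weight-storage estimate, assuming ~6.7e9 parameters for MPT-7B.
PARAMS = 6.7e9

def weight_gb(params: float, bytes_per_param: float) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes); excludes KV cache and overhead."""
    return params * bytes_per_param / 1e9

q8_gb = weight_gb(PARAMS, 1)    # 8-bit: 1 byte/param   -> ~6.7 GB (card lists 6.9 GB)
bf16_gb = weight_gb(PARAMS, 2)  # bfloat16: 2 bytes/param -> ~13.4 GB (unquantized files list 13.3 GB)
print(round(q8_gb, 1), round(bf16_gb, 1))
```

The small gap between the estimate and the listed 6.9 GB is plausibly non-quantized tensors (embeddings, norms) and safetensors metadata; actual VRAM at inference time is higher once activations and the KV cache are included.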

Best Alternatives to Mpt 7B Instruct Q8

| Best Alternatives | Context / RAM | Downloads | Likes |
| --- | --- | --- | --- |
| Mpt 7B Chat Q8 | 0K / 6.9 GB | 22 | 1 |
| Mpt 7B Chat | 0K / 13.3 GB | 18663 | 512 |
| Mpt 7B Instruct | 0K / 13.3 GB | 8063 | 468 |
| Mpt 7B Int8 Ov | 0K / 0 GB | 10 | 0 |
| Sea Lion 7B Instruct | 0K / 15 GB | 531 | 23 |
| Sea Lion 7B Instruct Gptq | 0K / 5.5 GB | 8 | 0 |
| Mpt 7B 8K Instruct | 0K / 13.3 GB | 1321 | 26 |
| Sea Lion 7B Instruct Research | 0K / 15 GB | 40 | 14 |
| Mpt Instruct DotNet S | 0K / 4.6 GB | 1276 | 1 |
| Mpt 7B 8K Chat Gptq | 0K / 3.8 GB | 16 | 2 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217