Dbrx Instruct Quantization Fixed by SinclairSchneider


Tags: arxiv:2211.15841, arxiv:2304.11277, autotrain compatible, custom code, dbrx, instruct, region:us, safetensors, sharded, tensorflow


Dbrx Instruct Quantization Fixed Parameters and Internals

Model Type: text generation
Use Cases:
  Areas: commercial, research
  Primary Use Cases: few-turn question answering, general English-language and coding tasks
  Limitations: not intended for non-English languages; does not support native code execution; not suitable for uses that violate applicable laws
  Considerations: users should perform additional safety testing in their application domain.
Additional Notes: DBRX Instruct is the instruction-tuned variant of the DBRX base model.
Supported Languages: English (high)
Training Details:
  Data Volume: 12T tokens
  Methodology: next-token prediction with a fine-grained mixture-of-experts (MoE) architecture
  Context Length: 32768 tokens
  Model Architecture: transformer-based, decoder-only, with fine-grained MoE; 132B total parameters, 36B active on any given input
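Because only a few experts run for each token, the 132B-parameter model activates about 36B parameters per input (DBRX routes each token to 4 of its 16 experts). A minimal sketch of top-k expert routing, using toy layer sizes rather than DBRX's real dimensions:

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Toy fine-grained MoE layer: each token is routed to k of n experts."""
    def __init__(self, d_model=64, n_experts=16, k=4):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)  # pick k experts per token
        weights = weights.softmax(dim=-1)                   # normalize routing weights
        out = torch.zeros_like(x)
        for slot in range(self.k):  # only the selected experts ever run
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```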
Responsible AI Considerations:
  Fairness: DBRX should be considered primarily for general text-based use in English.
  Transparency: technical details are provided in the Databricks blog post.
  Accountability: Databricks
  Mitigation Strategies: retrieval-augmented generation (RAG) is recommended for tasks where accuracy matters.
Input/Output:
  Input Format: text-based inputs, up to 32768 tokens
  Accepted Modalities: text
  Output Format: text-based outputs
  Performance Tips: use FlashAttention 2 for faster GPU inference where supported.
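A sketch of enabling FlashAttention 2 through Transformers, assuming the flash-attn package is installed and the GPU supports it; trust_remote_code is needed because the repo ships custom DBRX code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SinclairSchneider/dbrx-instruct-quantization-fixed"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,               # matches the repo's bfloat16 weights
    attn_implementation="flash_attention_2",  # requires flash-attn >= 2.x and a supported GPU
    device_map="auto",                        # spread the 61 shards across available devices
    trust_remote_code=True,
)
```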
Release Notes:
  Version: 1.0
  Notes: adjusted for 4-bit and 8-bit loading; available under an open license for broader use.
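Since the fix specifically targets quantized loading, here is a minimal 4-bit loading sketch using bitsandbytes; the quantization settings and prompt are illustrative, and load_in_8bit=True can be used instead for 8-bit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "SinclairSchneider/dbrx-instruct-quantization-fixed"

# 4-bit NF4 quantization; swap for BitsAndBytesConfig(load_in_8bit=True) for 8-bit.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "What is DBRX?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```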
LLM Name: Dbrx Instruct Quantization Fixed
Repository: https://huggingface.co/SinclairSchneider/dbrx-instruct-quantization-fixed
Model Size: 131.6b
Required VRAM: 176.9 GB
Updated: 2025-02-22
Maintainer: SinclairSchneider
Model Type: dbrx
Instruction-Based: Yes
Model Files: 1-of-61 (3.5 GB), 2-of-61 (4.4 GB), 3-of-61 (4.2 GB), 4-of-61 (4.4 GB), 5-of-61 (4.4 GB), 6-of-61 (4.2 GB), 7-of-61 (4.4 GB), 8-of-61 (4.4 GB), 9-of-61 (4.2 GB), 10-of-61 (4.4 GB), 11-of-61 (4.4 GB), 12-of-61 (4.2 GB), 13-of-61 (4.4 GB), 14-of-61 (4.4 GB), 15-of-61 (4.2 GB), 16-of-61 (4.4 GB), 17-of-61 (4.4 GB), 18-of-61 (4.2 GB), 19-of-61 (4.4 GB), 20-of-61 (4.4 GB), 21-of-61 (4.2 GB), 22-of-61 (4.4 GB), 23-of-61 (4.4 GB), 24-of-61 (4.2 GB), 25-of-61 (4.4 GB), 26-of-61 (4.4 GB), 27-of-61 (4.2 GB), 28-of-61 (4.4 GB), 29-of-61 (4.4 GB), 30-of-61 (4.2 GB), 31-of-61 (4.4 GB), 32-of-61 (4.4 GB), 33-of-61 (4.2 GB), 34-of-61 (4.4 GB), 35-of-61 (4.4 GB), 36-of-61 (4.2 GB), 37-of-61 (4.4 GB), 38-of-61 (4.4 GB), 39-of-61 (4.2 GB), 40-of-61 (4.4 GB), 41-of-61 (4.4 GB)
Model Architecture: DbrxForCausalLM
License: other
Transformers Version: 4.38.2
Vocabulary Size: 100352
Torch Data Type: bfloat16

Best Alternatives to Dbrx Instruct Quantization Fixed

Best Alternatives    Context / RAM     Downloads   Likes
Dbrx Instruct        0K / 181.1 GB     13287       1110
Dbrx Instruct        0K / 181.1 GB     222         0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227