Dbrx Base Converted V2 by LnL-AI


  Arxiv:2211.15841   Arxiv:2304.11277   Autotrain compatible   Custom code   Dbrx   Endpoints compatible   Region:us   Safetensors   Sharded   Tensorflow


Dbrx Base Converted V2 Parameters and Internals

Model Type 
transformer-based, decoder-only, mixture-of-experts (MoE)
Use Cases 
Areas:
commercial, research
Applications:
text completion
Primary Use Cases:
general English-language text completion and coding tasks
Limitations:
not instruction fine-tuned, not trained for interactive chat, not intended for non-English languages
Considerations:
Use retrieval-augmented generation (RAG) in scenarios where accuracy and fidelity are important; a minimal sketch of the pattern follows.
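
A minimal sketch of that pattern, assuming a completion-only base model: retrieve supporting passages and prepend them to the prompt. The retrieve function below is a hypothetical placeholder, not part of any DBRX tooling.

```python
# Hypothetical RAG prompt assembly for a completion-only base model.
# `retrieve` is a placeholder, not part of any DBRX tooling.
def retrieve(query: str, k: int = 3) -> list[str]:
    # Placeholder: swap in BM25 / vector search over your own corpus.
    return ["<passage 1>", "<passage 2>", "<passage 3>"][:k]

def rag_prompt(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    # A base model continues text, so frame the task as a document to complete.
    return f"{context}\n\nQuestion: {question}\nAnswer:"

print(rag_prompt("What is DBRX?"))
```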
Additional Notes 
This version of DBRX is not instruction fine-tuned and should be used only as a completion model.
Training Details 
Data Sources:
text, code data
Data Volume:
12T tokens
Methodology:
fine-grained mixture-of-experts (MoE) architecture, rotary position encodings (RoPE), gated linear units (GLU), grouped query attention (GQA), curriculum learning
Context Length:
32768
Model Architecture:
132B total parameters across 16 experts; only 36B parameters are active for any given input (see the routing sketch below)
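
To illustrate how a fine-grained MoE activates only a subset of its parameters, here is a generic top-k routing sketch in PyTorch. It is not DBRX's actual implementation; the 4-of-16 expert choice is taken from the public DBRX announcement, and d_model below is an arbitrary demo value, not DBRX's hidden size.

```python
# Generic top-k expert routing sketch (illustrative only, not DBRX's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 16, top_k: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        logits = self.gate(x)                           # (tokens, n_experts)
        weights, experts = logits.topk(self.top_k, -1)  # keep 4 of 16 experts
        weights = F.softmax(weights, dim=-1)            # renormalize over the chosen
        return weights, experts                         # mix only the selected FFNs

router = TopKRouter(d_model=1024)         # d_model is arbitrary for the demo
w, idx = router(torch.randn(8, 1024))     # 8 tokens
print(idx.shape)                          # torch.Size([8, 4])
```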
Input Output 
Accepted Modalities:
text
Output Format:
text
Release Notes 
Version:
1.0
Notes:
DBRX Base is a pretrained base model, released alongside DBRX Instruct, an instruction fine-tuned variant.
LLM Name: Dbrx Base Converted V2
Repository: 🤗 https://huggingface.co/LnL-AI/dbrx-base-converted-v2
Model Size: 131.6b
Required VRAM: 181.1 GB
Updated: 2025-02-22
Maintainer: LnL-AI
Model Type: dbrx
Model Files: 61 safetensors shards; shard 1-of-61 is 3.5 GB and shards 2-of-61 through 42-of-61 are 4.2–4.4 GB each
Model Architecture: DbrxForCausalLM
License: other
Transformers Version: 4.38.2
Vocabulary Size: 100352
Torch Data Type: bfloat16
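
A minimal loading sketch based on the details above (Transformers 4.38.2+, bfloat16, custom DbrxForCausalLM code shipped in the repo). It assumes the accelerate package for device_map="auto" and enough GPU memory for the ~181 GB checkpoint; quantized setups are out of scope here.

```python
# Minimal loading sketch for the repo listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LnL-AI/dbrx-base-converted-v2"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,   # matches the card's Torch Data Type
    device_map="auto",            # shard weights across available GPUs
    trust_remote_code=True,       # the repo ships custom DbrxForCausalLM code
)

# DBRX Base is completion-only: hand it a prefix to continue, not a chat turn.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```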

Best Alternatives to Dbrx Base Converted V2

Best Alternatives | Context / RAM | Downloads | Likes
Dbrx Instruct | 0K / 181.1 GB | 13287 | 1110
...lphin 2.9.1 Dbrx Llamacppfixed | 0K / 186.6 GB | 8 | 2
Dolphin 2.9.1 Dbrx | 0K / 215.9 GB | 11 | 8
Dbrx Base | 0K / 185.5 GB | 19 | 528
Dbrx Instruct | 0K / 181.1 GB | 222 | 0
...rx Instruct Quantization Fixed | 0K / 176.9 GB | 15 | 9
Dbrx Base Fixed | 0K / 206.1 GB | 15 | 6
Dbrx Base Quantization Fixed | 0K / 181.1 GB | 7 | 1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227