Mixtralnt 4x7b Test by chargoddard


Tags: Autotrain compatible · Endpoints compatible · Mixtral · MoE · Region: US · Safetensors · Sharded · Tensorflow

Mixtralnt 4x7b Test Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Evaluated model: Mixtralnt 4x7b Test (chargoddard/mixtralnt-4x7b-test)

Mixtralnt 4x7b Test Parameters and Internals

Model Type: Transformers
Additional Notes: The model uses a mixture-of-experts (MoE) architecture built by combining existing Mistral 7B models into a single sparse model.
Input Format: Not confirmed; the maintainer's original card says "maybe alpaca??? or chatml???", i.e. possibly Alpaca or ChatML. See the prompt-template sketch below.
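Since neither prompt format is confirmed, both candidates are worth trying. A minimal sketch of the two standard templates, with hypothetical helper names not taken from the model card:

```python
# Hypothetical prompt builders for the two formats the maintainer suggests.
# Neither is confirmed for this model; try both and keep whichever yields
# coherent output.

def alpaca_prompt(instruction: str) -> str:
    """Standard Alpaca instruction template."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

def chatml_prompt(instruction: str) -> str:
    """Standard ChatML template."""
    return (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```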
LLM Name: Mixtralnt 4x7b Test
Repository: 🤗 https://huggingface.co/chargoddard/mixtralnt-4x7b-test
Model Size: 24.2B parameters
Required VRAM: 48.5 GB
Updated: 2025-02-22
Maintainer: chargoddard
Model Type: mixtral
Model Files: 10 safetensors shards totaling 48.5 GB (shard 1: 4.9 GB; shards 2-3: 5.0 GB each; shard 4: 4.9 GB; shards 5-9: 5.0 GB each; shard 10: 3.7 GB)
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
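Given the specifications above, the model can be loaded with the Hugging Face transformers library. A minimal sketch, assuming transformers >= 4.36 (the version listed above) and enough memory for the 48.5 GB of bfloat16 weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "chargoddard/mixtralnt-4x7b-test"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the repo's stored dtype
    device_map="auto",           # shard weights across available devices
)

# The context window is 32768 tokens per the model card.
inputs = tokenizer("Mixture-of-experts models work by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```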

Quantized Models of the Mixtralnt 4x7b Test

Model                       Likes   Downloads   VRAM
Mixtralnt 4x7b Test GGUF      19        30      48 GB
Mixtralnt 4x7b Test AWQ        1        17      13 GB
Mixtralnt 4x7b Test GPTQ       5        30      12 GB
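For machines without roughly 48 GB of VRAM, the quantized builds above are the practical route. A sketch of loading a GGUF quant with llama-cpp-python; the file path is illustrative, since the exact quant filenames depend on the quantizer's repository:

```python
from llama_cpp import Llama

# Illustrative path: substitute the GGUF quant file you actually downloaded.
llm = Llama(
    model_path="./mixtralnt-4x7b-test.Q4_K_M.gguf",
    n_ctx=32768,      # full context window from the model card
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm(
    "### Instruction:\nSummarize what a mixture-of-experts model is.\n\n### Response:\n",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```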

Best Alternatives to Mixtralnt 4x7b Test

Best Alternatives               Context / RAM    Downloads   Likes
Dzakwan MoE 4x7b Beta           32K / 48.4 GB      3844        0
Beyonder 4x7B V3                32K / 48.3 GB      3941       58
Calme 4x7B MoE V0.2             32K / 48.3 GB      5636        2
Proto Athena 4x7B               32K / 48.4 GB        15        0
Proto Athena V0.2 4x7B          32K / 48.4 GB         8        0
Mera Mix 4x7B                   32K / 48.3 GB      3525       18
Calme 4x7B MoE V0.1             32K / 48.3 GB      3951        2
CognitiveFusion2 4x7B BF16      32K / 48.3 GB      3699        3
MixtureofMerges MoE 4x7b V5     32K / 48.3 GB      1974        1
MixtureofMerges MoE 4x7b V4     32K / 48.3 GB      1991        4



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227