Firefly Mixtral 8x7b GPTQ by TheBloke


Tags: 4-bit, autotrain compatible, base model: yeungnlp/firefly-mi..., en, gptq, license: apache-2.0, mixtral, moe, quantized, region: us, safetensors

Rank the Firefly Mixtral 8x7b GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Firefly Mixtral 8x7b GPTQ (TheBloke/firefly-mixtral-8x7b-GPTQ)

Best Alternatives to Firefly Mixtral 8x7b GPTQ

Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
Mixtral 8x7B V0.1 GPTQ | 68.4 | 32K / 23.8 GB | 17551 | 25
...ixtral 8x7B Instruct V0.1 GPTQ | 68.2 | 32K / 23.8 GB | 689581 | 127
Dolphin 2.5 Mixtral 8x7b GPTQ | n/a | 32K / 23.8 GB | 227 | 92
...Hermes 2 Mixtral 8x7B DPO GPTQ | n/a | 32K / 23.8 GB | 11710 | 25
Dolphin 2.7 Mixtral 8x7b GPTQ | n/a | 32K / 23.8 GB | 329 | 18
Mixtral SlimOrca 8x7B GPTQ | n/a | 32K / 23.8 GB | 35 | 11
...Hermes 2 Mixtral 8x7B SFT GPTQ | n/a | 32K / 23.8 GB | 29 | 10
...nthia MoE V3 Mixtral 8x7B GPTQ | n/a | 32K / 23.8 GB | 15 | 10
Open Gpt4 8x7B GPTQ | n/a | 32K / 23.8 GB | 13 | 9
...maid V0.1 Mixtral 8x7b V3 GPTQ | n/a | 32K / 23.8 GB | 74 | 8
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/firefly-mixtral-8x7b-GPTQ.

Firefly Mixtral 8x7b GPTQ Parameters and Internals

LLM Name: Firefly Mixtral 8x7b GPTQ
Repository: TheBloke/firefly-mixtral-8x7b-GPTQ (Hugging Face)
Model Name: Firefly Mixtral 8X7B
Model Creator: YeungNLP
Base Model(s): Firefly Mixtral 8x7b (YeungNLP/firefly-mixtral-8x7b)
Model Size: 6.1b
Required VRAM: 23.8 GB
Updated: 2024-07-04
Maintainer: TheBloke
Model Type: mixtral
Model Files: 23.8 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
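
As a rough illustration of how these parameters translate into use, below is a minimal loading sketch for this quant. It assumes a recent transformers (the listing reports 4.37.0.dev0) together with the accelerate, optimum, and auto-gptq packages, and a GPU with roughly 24 GB of VRAM to hold the 23.8 GB of GPTQ weights; the prompt shown is purely illustrative and is not necessarily the model's expected chat format.

```python
# Minimal sketch: loading TheBloke/firefly-mixtral-8x7b-GPTQ with Hugging Face transformers.
# Assumes transformers >= 4.37 plus accelerate, optimum, and auto-gptq are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/firefly-mixtral-8x7b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocabulary
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the ~23.8 GB of quantized weights on the available GPU(s)
    torch_dtype="auto",  # resolves to float16, per the model config
)

# Illustrative prompt only; check the model card for the expected prompt format.
prompt = "Write a short poem about fireflies."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```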


Original data from HuggingFace, OpenCompass and various public git repos.