Firefly Mixtral 8x7b by YeungNLP


Autotrain compatible | En | Endpoints compatible | License: apache-2.0 | Mixtral | MoE | Region: US | Safetensors | Sharded | Tensorflow

Rank the Firefly Mixtral 8x7b Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Firefly Mixtral 8x7b (YeungNLP/firefly-mixtral-8x7b)

Quantized Models of the Firefly Mixtral 8x7b

Model                        Likes   Downloads   VRAM
Firefly Mixtral 8x7b GGUF    10      167         15 GB
Firefly Mixtral 8x7b GPTQ    3       2           23 GB
Firefly Mixtral 8x7b AWQ     2       2           24 GB
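
For GPUs that cannot hold the full float16 checkpoint, one of the quantized variants above is the usual route. Below is a minimal sketch of running a GGUF quant with llama-cpp-python; the repo_id and filename are placeholders (the table does not list the exact quant repositories), so substitute the GGUF release you actually download.

```python
# Hypothetical sketch: run a GGUF quant of Firefly Mixtral 8x7b with llama-cpp-python.
# The repo_id and filename below are placeholders, not confirmed repositories.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="your-namespace/firefly-mixtral-8x7b-GGUF",  # placeholder: use the actual GGUF repo
    filename="firefly-mixtral-8x7b.Q2_K.gguf",           # placeholder: pick the quant file you need
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=32768,        # matches the model's 32K context window
    n_gpu_layers=-1,    # offload all layers to GPU if VRAM allows; lower this otherwise
)

out = llm("Explain what a mixture-of-experts model is.", max_tokens=256)
print(out["choices"][0]["text"])
```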

Best Alternatives to Firefly Mixtral 8x7b

Best Alternatives                      HF Rank   Context/RAM      Downloads   Likes
Mixtral 8x7B V0.1                      77.95     32K / 93.6 GB    1046472     1587
Mixtral 8x7B Instruct V0.1             77.75     32K / 93.6 GB    529186      3945
...lQA Mixtral 8x7B Instruct V0.1                32K / 43.3 GB    5           2
Mixtral 8x7B V0.1 Fp8                            32K / 47 GB      23          0
Mixtral 8x7B Instruct V0.1 FP8                   32K / 47.1 GB    237         1
...tral 8x7B Instruct V0.1 FP8 V3                32K / 47.1 GB    36          0
...tral 8x7B Instruct V0.1 FP8 V2                32K / 47.1 GB    11          0
...tral 8x7B Instruct V0.1 FP8 V1                32K / 47.1 GB    6           0
Aldan Mix 8x7B                                   32K / 89.4 GB    1           1
Taiwan LLM 8x7B DPO                              32K / 90 GB      574         18
Note: an HF Rank score shown in green on the source page (e.g. "73.2") indicates that the model outperforms YeungNLP/firefly-mixtral-8x7b.

Firefly Mixtral 8x7b Parameters and Internals

LLM Name: Firefly Mixtral 8x7b
Repository: YeungNLP/firefly-mixtral-8x7b (open on 🤗 Hugging Face)
Model Size: 46.7b
Required VRAM: 93.6 GB
Updated: 2024-07-07
Maintainer: YeungNLP
Model Type: mixtral
Model Files: 19 safetensors shards: 4.9 GB (1-of-19), 5.0 GB (2-of-19), 5.0 GB (3-of-19), 4.9 GB (4-of-19), 5.0 GB (5-of-19), 5.0 GB (6-of-19), 4.9 GB (7-of-19), 5.0 GB (8-of-19), 5.0 GB (9-of-19), 4.9 GB (10-of-19), 5.0 GB (11-of-19), 5.0 GB (12-of-19), 5.0 GB (13-of-19), 4.9 GB (14-of-19), 5.0 GB (15-of-19), 5.0 GB (16-of-19), 4.9 GB (17-of-19), 5.0 GB (18-of-19), 4.2 GB (19-of-19)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
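
The internals listed above map directly onto a standard Transformers loading call. The following is a minimal sketch, assuming transformers >= 4.36.1 (the version the checkpoint was saved with), the accelerate package for device_map, and enough memory for the ~93.6 GB of float16 weights; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load Firefly Mixtral 8x7b with the settings listed in this card.
# Assumes transformers >= 4.36.1, accelerate installed, and ~94 GB of memory for float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YeungNLP/firefly-mixtral-8x7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocabulary size 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the card's Torch Data Type
    device_map="auto",           # spreads the 19 safetensors shards across available devices
)

prompt = "Explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)  # context window is 32768 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```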


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801