Yi 34Bx2 MoE 60B GGUF by second-state


Tags: Autotrain compatible · Base model: cloudyu/yi-34bx2-mo... · Base model (quantized): cloudyu/y... · Conversational · Endpoints compatible · FP16 · GGUF · Mixtral · MoE · Q2 · Q8_0 · Quantized · Region: US

Yi 34Bx2 MoE 60B GGUF Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Yi 34Bx2 MoE 60B GGUF (second-state/Yi-34Bx2-MoE-60B-GGUF)

Yi 34Bx2 MoE 60B GGUF Parameters and Internals

Model Type: mistral
Additional Notes: Quantized by Second State Inc. Quantization methods include Q2_K, Q3_K_L, Q3_K_M, Q3_K_S, Q4_0, Q4_K_M, Q4_K_S, Q5_0, Q5_K_M, Q5_K_S, Q6_K, Q8_0, and f16.
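As a rough guide to choosing among these quantization methods, file size scales with the average bits per weight. A minimal sketch, assuming approximate llama.cpp bits-per-weight averages (these averages are an assumption — the exact figures vary by model and llama.cpp version, and are not published in this listing):

```python
# Rough GGUF file-size estimate: params (billions) * bits-per-weight / 8 ≈ decimal GB.
# The bits-per-weight averages below are approximations (assumption), not
# values published for this specific repository.
APPROX_BPW = {
    "Q2_K": 3.35, "Q3_K_S": 3.50, "Q3_K_M": 3.91, "Q3_K_L": 4.27,
    "Q4_0": 4.55, "Q4_K_S": 4.58, "Q4_K_M": 4.85,
    "Q5_0": 5.54, "Q5_K_S": 5.54, "Q5_K_M": 5.69,
    "Q6_K": 6.59, "Q8_0": 8.50, "f16": 16.0,
}

def estimate_file_gb(params_billions: float, quant: str) -> float:
    """Estimated GGUF file size in decimal GB (ignores tokenizer/metadata overhead)."""
    return params_billions * APPROX_BPW[quant] / 8

if __name__ == "__main__":
    for q in ("Q2_K", "Q4_K_M", "Q8_0"):
        print(f"{q}: ~{estimate_file_gb(60.8, q):.1f} GB")
```

For a ~60.8B-parameter model this lands in the same ballpark as the file sizes listed below (e.g. ~36.9 GB estimated for Q4_K_M vs. the listed 36.7 GB), though real files also carry tokenizer and metadata.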
LLM Name: Yi 34Bx2 MoE 60B GGUF
Repository: https://huggingface.co/second-state/Yi-34Bx2-MoE-60B-GGUF
Model Name: Yi 34Bx2 MoE 60B
Model Creator: cloudyu
Base Model(s): Yi 34Bx2 MoE 60B (cloudyu/Yi-34Bx2-MoE-60B)
Model Size: 60B
Required VRAM: 0.3 GB
Updated: 2025-02-05
Maintainer: second-state
Model Type: mistral
Model Files: 22.4 GB; 31.8 GB; 29.2 GB; 26.3 GB; 34.3 GB; 36.7 GB; 34.6 GB; 41.9 GB; 43.1 GB; 41.9 GB; 49.9 GB; split files: 32.2 GB (1 of 3), 32.1 GB (2 of 3), 0.3 GB (3 of 3); 31.9 GB (1 of 8), 31.7 GB (2 of 8), 31.7 GB (3 of 8), 31.7 GB (4 of 8), 31.7 GB (5 of 8), 31.7 GB (6 of 8), 31.7 GB (7 of 8), 21.1 GB (8 of 8)
GGUF Quantization: Yes
Quantization Type: fp16 | gguf | q2 | q8_0 | q4_k | q5_k
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 200000
Model Max Length: 200000
Transformers Version: 4.36.2
Vocabulary Size: 64000
Torch Data Type: bfloat16
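The 200000-token context length has a memory cost of its own: the KV cache grows linearly with sequence length. A back-of-the-envelope sketch, assuming Yi-34B-style attention dimensions (60 layers, 8 grouped-query KV heads, head dimension 128 — these dimensions are assumptions not stated in this listing; in a Mixtral-style MoE the attention layers, and hence the KV cache, are shared across experts):

```python
def kv_cache_gib(ctx_len: int, n_layers: int = 60, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """fp16 KV-cache size in GiB: 2 tensors (K and V) per layer.
    Defaults are assumed Yi-34B-style dimensions, not values from this listing."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total_bytes / 2**30

print(f"{kv_cache_gib(200_000):.1f} GiB")  # ≈ 45.8 GiB at the assumed dimensions
```

In other words, filling the full 200K context would cost on the order of 45 GiB of cache on top of the model weights, which is why most local deployments run this model at a much shorter context.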

Best Alternatives to Yi 34Bx2 MoE 60B GGUF

Best Alternatives | Context / RAM | Downloads | Likes
... 34Bx2 MoE 60B 2.65bpw H6 EXL2 | 195K / 21.2 GB | 6 | 1
...i 34Bx2 MoE 60B 5.0bpw H6 EXL2 | 195K / 38.8 GB | 5 | 1
...i 34Bx2 MoE 60B 6.0bpw H6 EXL2 | 195K / 46.2 GB | 4 | 1
...l 34Bx2 MoE 60B 8.0bpw H8 EXL2 | 195K / 61.3 GB | 4 | 2
...l 34Bx2 MoE 60B 2.4bpw H6 EXL2 | 195K / 19.3 GB | 5 | 3
... 34Bx2 MoE 60B 4.65bpw H6 EXL2 | 195K / 36.2 GB | 4 | 1
...l 34Bx2 MoE 60B EXL2 2.0bpw H8 | 195K / 18.1 GB | 7 | 1
... 34Bx2 MoE 60B 2.65bpw H6 EXL2 | 195K / 21.2 GB | 4 | 2
...l 34Bx2 MoE 60B 3.0bpw H6 EXL2 | 195K / 23.8 GB | 8 | 2
...l 34Bx2 MoE 60B 4.0bpw H6 EXL2 | 195K / 31.3 GB | 5 | 2
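The EXL2 alternatives above differ mainly in bits per weight (bpw), so their RAM figures can be sanity-checked with the same arithmetic: weight memory ≈ parameters × bpw / 8, plus overhead for the output head kept at higher precision (the H6/H8 suffix) and runtime buffers. A hedged sketch, assuming ~60.8B total parameters (an assumption based on the model size class):

```python
def exl2_weights_gb(params_billions: float, bpw: float) -> float:
    """Approximate EXL2 weight storage in decimal GB.
    Excludes the higher-precision head (H6/H8) and runtime overhead."""
    return params_billions * bpw / 8

# 2.65 bpw row: estimate ~20.1 GB vs. the listed 21.2 GB
# (difference ≈ head precision + metadata overhead)
print(round(exl2_weights_gb(60.8, 2.65), 1))
```

The same formula puts the 8.0 bpw variant at ~60.8 GB, close to the listed 61.3 GB, so the table's RAM column tracks bpw almost exactly.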

Rank the Yi 34Bx2 MoE 60B GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227