Faro Yi 34B by wenbopan


Tags: Merged Model · Arxiv:2303.08774 · Autotrain compatible · Conversational · Dataset:wenbopan/fusang-v1 · Dataset:wenbopan/openorca-zh-2... · En · Endpoints compatible · License:mit · Llama · Region:us · Safetensors · Sharded · Tensorflow · Zh

Faro Yi 34B Benchmarks

nn.n% : how the model compares to GPT-4.

Rank the Faro Yi 34B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Faro Yi 34B (wenbopan/Faro-Yi-34B)

Best Alternatives to Faro Yi 34B

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
UNA SimpleSmaug 34B V1beta | 77.41 | 32K / 69.2 GB | 2131 | 17
Smaug 34B V0.1 | 77.29 | 195K / 69.2 GB | 6640 | 53
Maxine 34B Stock | 77.28 | 195K / 67.8 GB | 1048 | 3
Luminex 34B V0.2 | 77.19 | 32K / 68.9 GB | 3498 | 11
Luminex 34B V0.1 | 77.06 | 195K / 68.9 GB | 1976 | 7
Pearl 34B Ties | 75.48 | 195K / 67.8 GB | 2505 | 3
Bagel Hermes 34B Slerp | 75.24 | 195K / 68.9 GB | 3552 | 0
HermesBagel 34B V0.1 | 75.15 | 4K / 68.9 GB | 2454 | 1
Yi 34B X2 | 75.02 | 195K / 68.9 GB | 2370 | 0
MetaMath Bagel DPO 34B | 74.8 | 195K / 69.2 GB | 2519 | 15
Note: a green Score (e.g. "73.2") means that the model performs better than wenbopan/Faro-Yi-34B.
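
The Downloads and Likes figures above are snapshots and drift over time. A minimal sketch, assuming the huggingface_hub client, of how such figures can be re-fetched live from the Hub API (the Smaug repo id is an assumed upstream location, not taken from the table):

```python
# Hedged sketch: refresh Downloads/Likes counts from the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; the second repo id is an assumption.
from huggingface_hub import model_info

for repo_id in ("wenbopan/Faro-Yi-34B", "abacusai/Smaug-34B-v0.1"):
    info = model_info(repo_id)  # returns a ModelInfo with downloads/likes
    print(f"{repo_id}: downloads={info.downloads}, likes={info.likes}")
```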

Faro Yi 34B Parameters and Internals

LLM Name: Faro Yi 34B
Repository: Open on 🤗
Merged Model: Yes
Model Size: 34B
Required VRAM: 69.2 GB
Updated: 2024-04-20
Maintainer: wenbopan
Model Type: llama
Model Files: 4.8 GB (1-of-15), 4.8 GB (2-of-15), 5.0 GB (3-of-15), 4.8 GB (4-of-15), 4.8 GB (5-of-15), 5.0 GB (6-of-15), 4.8 GB (7-of-15), 4.8 GB (8-of-15), 5.0 GB (9-of-15), 4.8 GB (10-of-15), 4.8 GB (11-of-15), 5.0 GB (12-of-15), 4.8 GB (13-of-15), 4.8 GB (14-of-15), 1.2 GB (15-of-15)
Supported Languages: zh, en
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 200000
Model Max Length: 200000
Transformers Version: 4.39.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 64000
Initializer Range: 0.02
Torch Data Type: bfloat16
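
Given the internals above (LlamaForCausalLM architecture, LlamaTokenizer, bfloat16 weights, a 200K context), a minimal loading sketch with Hugging Face transformers might look like the following. The chat-template call assumes the repository ships a chat template, which the listing does not confirm; hardware able to hold the ~69.2 GB of bfloat16 weights (or CPU offload via device_map="auto") is also assumed.

```python
# Minimal loading sketch for a sharded LlamaForCausalLM checkpoint.
# Assumptions (not from the listing): the repo ships a chat template,
# and device_map="auto" can place the ~69.2 GB of weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wenbopan/Faro-Yi-34B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's "Torch Data Type"
    device_map="auto",           # shard across GPUs / offload as needed
)

messages = [{"role": "user", "content": "用中文介绍一下你自己。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```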


Original data from HuggingFace, OpenCompass and various public git repos.