Yayi 7B by wenge-research


Tags: Autotrain compatible · Bloom · En · Endpoints compatible · Pytorch · Region: us · Sharded · Yayi · Zh
Model Card on HF 🤗: https://huggingface.co/wenge-research/yayi-7b

Yayi 7B Benchmarks

Yayi 7B (wenge-research/yayi-7b)

Yayi 7B Parameters and Internals

Model Type: Text Generation

Use Cases
  Areas: Research
  Applications: Media publicity, public opinion analysis, public safety, financial risk control, urban governance
  Primary Use Cases: Over a hundred natural language instruction tasks
  Limitations: Factually incorrect responses; inability to effectively identify harmful instructions; logical reasoning, code generation, etc. still require improvement
  Considerations: Ensure responsible and safe usage in accordance with the terms of use.

Additional Notes: The model is open-sourced to foster collaboration on the development of Chinese pre-trained models.

Supported Languages: Chinese (comprehensive)
Training Details
  Data Sources: Media publicity, public opinion analysis, public safety, financial risk control, and urban governance domain data
  Data Volume: Millions of high-quality domain data samples
  Methodology: Instruction fine-tuning (a minimal sketch follows this section)
  Hardware Used: A single GPU such as an A100, A800, or 3090
  Model Architecture: Pre-trained transformer model architecture
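
Below is a minimal single-GPU instruction fine-tuning sketch in the spirit of the methodology listed above. The dataset file, prompt format, and hyperparameters are illustrative assumptions, not the published YaYi training recipe; smaller cards such as a 3090 would additionally need memory-saving techniques (e.g. LoRA or gradient checkpointing).

```python
# Minimal single-GPU instruction fine-tuning sketch (illustrative assumptions,
# not the published YaYi recipe).
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "wenge-research/yayi-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Hypothetical JSONL file with {"instruction": ..., "output": ...} records.
dataset = load_dataset("json", data_files="domain_instructions.jsonl", split="train")

def format_and_tokenize(example):
    # Concatenate instruction and response, ending with EOS so the model
    # learns where a completed answer stops.
    text = f"{example['instruction']}\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="yayi-7b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # Causal-LM collator (mlm=False) pads batches and builds labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```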
Input Output
  Input Format: Text with instruction prompts
  Accepted Modalities: Text
  Output Format: Generated text responses
  Performance Tips: Ensure eos_token_id is correctly set for generation (see the generation sketch below).
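
A minimal generation sketch, assuming only the torch, transformers, and accelerate packages. The plain prompt is illustrative; the exact instruction template and any special tokens are defined on the model card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "wenge-research/yayi-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published bfloat16 weights
    device_map="auto",
)

prompt = "请简要介绍一下北京的历史。"  # example instruction: "Briefly introduce the history of Beijing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    # Per the performance tip above: set the EOS id so generation terminates cleanly.
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```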
LLM Name: Yayi 7B
Repository 🤗: https://huggingface.co/wenge-research/yayi-7b
Model Size: 7b
Required VRAM: 28.2 GB
Updated: 2024-12-26
Maintainer: wenge-research
Model Type: bloom
Model Files: 14.1 GB (shard 1 of 2), 14.1 GB (shard 2 of 2)
Supported Languages: zh, en
Model Architecture: BloomForCausalLM
Transformers Version: 4.28.1
Tokenizer Class: BloomTokenizer
Padding Token: <pad>
Vocabulary Size: 250684
Torch Data Type: bfloat16
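
A quick sketch for confirming the architecture, vocabulary size, and dtype listed above directly from the Hub config, without downloading the ~28 GB of weights.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("wenge-research/yayi-7b")
print(config.architectures)  # expected per the table: ['BloomForCausalLM']
print(config.vocab_size)     # expected per the table: 250684
print(config.torch_dtype)    # expected per the table: torch.bfloat16
```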

Best Alternatives to Yayi 7B

Best Alternatives                  | Context / RAM | Downloads | Likes
...tral 7B Instruct V0.2 Uld Loss  | 0K / 2.2 GB   | 9         | 0
...Qa Llama 2 7B Chat Hf Uld Loss  | 0K / 2.2 GB   | 8         | 0
... 7B Instruct V0.2 Text Teacher  | 0K / 2.2 GB   | 8         | 0
...lama 2 7B Chat Hf Text Teacher  | 0K / 2.2 GB   | 7         | 0
Phoenix Inst Chat 7B               | 0K / 16.2 GB  | 4887      | 43
Gogpt 7B Bloom                     | 0K / 32.3 GB  | 1148      | 3
Vietcuna 7B 2k5                    | 0K / 14.2 GB  | 11        | 0
Bloom Xp3                          | 0K / 32.4 GB  | 14        | 0
Vietcuna 7B V3                     | 0K / 14.2 GB  | 492       | 8
GrammarGPT                         | 0K / 32.3 GB  | 51        | 6


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217