Selfrag Zh Baichuan2 7B Chat by Aman


Tags: Autotrain compatible · Baichuan · Baichuan2 · Custom code · Endpoints compatible · PyTorch · RAG · Region: us · Sharded

Selfrag Zh Baichuan2 7B Chat Benchmarks

Benchmark scores (percentages) compare the model against reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Selfrag Zh Baichuan2 7B Chat (Aman/selfrag-zh_baichuan2_7b_chat)

Selfrag Zh Baichuan2 7B Chat Parameters and Internals

Model Type 
Text Generation, RAG
Additional Notes 
Reflection tokens are aligned with the English Self-RAG original; the accompanying 'critic model' still requires further refinement, since its quality depends on the quality of the critic training data.
Supported Languages 
Chinese (High proficiency)
Training Details 
Data Sources:
BELLE SFT data, Chinese Wikipedia documents
Data Volume:
4w.jsonl (approximately 40,000 examples)
Methodology:
Self-RAG based training
Model Architecture:
7B parameter model architecture
Input Output 
Input Format:
Text prompts with optional retrieval-enhanced input
Accepted Modalities:
Text
Output Format:
Text sequences
Performance Tips:
Use the original Transformers `generate` method for best accuracy (see the sketch below)
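
A minimal, illustrative sketch of that recommendation: loading the checkpoint and generating with the plain Transformers `generate` method. It assumes the repository id listed on this card, a GPU with enough memory for roughly 15 GB of bfloat16 weights, and `trust_remote_code=True` (the repo ships custom Baichuan code); the Chinese prompt is only a placeholder, not part of the model card.

```python
# Hedged sketch: plain Transformers generation, per the Performance Tips above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Aman/selfrag-zh_baichuan2_7b_chat"  # repo id from this card

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # matches the card's Torch Data Type
    device_map="auto",
    trust_remote_code=True,       # required for the custom Baichuan code
)

# Placeholder prompt: "Briefly introduce retrieval-augmented generation (RAG)."
prompt = "请简要介绍一下检索增强生成（RAG）。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```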
LLM Name: Selfrag Zh Baichuan2 7B Chat
Repository: 🤗 https://huggingface.co/Aman/selfrag-zh_baichuan2_7b_chat
Model Size: 7B
Required VRAM: 15 GB
Updated: 2025-02-22
Maintainer: Aman
Model Type: baichuan
Model Files: 9.9 GB (part 1 of 2), 5.1 GB (part 2 of 2)
Model Architecture: BaichuanForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.33.0
Tokenizer Class: BaichuanTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 125711
Torch Data Type: bfloat16
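
Since the Additional Notes above state that the reflection tokens are aligned with the English Self-RAG original, prompts can presumably be built the same way as for the English Self-RAG checkpoints. The sketch below copies the English release's instruction/response template and reflection-token list; treating them as valid for this Chinese port is an assumption, not something the card confirms.

```python
# Hedged sketch: Self-RAG-style prompt construction and output cleanup,
# assuming this port follows the English Self-RAG prompt conventions.
from typing import Optional

# Reflection tokens from the English Self-RAG release (assumed to carry over).
SELF_RAG_REFLECTION_TOKENS = [
    "[Retrieval]", "[No Retrieval]", "[Relevant]", "[Irrelevant]",
    "[Fully supported]", "[Partially supported]", "[No support / Contradictory]",
    "[Utility:1]", "[Utility:2]", "[Utility:3]", "[Utility:4]", "[Utility:5]",
    "[Continue to Use Evidence]",
]

def format_selfrag_prompt(instruction: str, passage: Optional[str] = None) -> str:
    """Wrap an instruction, and optionally a retrieved passage, in the
    template used by the English Self-RAG models."""
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    if passage is not None:
        # Retrieved evidence is enclosed in <paragraph> tags after a [Retrieval] token.
        prompt += f"[Retrieval]<paragraph>{passage}</paragraph>"
    return prompt

def strip_reflection_tokens(text: str) -> str:
    """Remove reflection tokens from a generated continuation for display."""
    for token in SELF_RAG_REFLECTION_TOKENS:
        text = text.replace(token, "")
    return text.strip()
```

For example, `format_selfrag_prompt("什么是检索增强生成？", passage=doc_text)` would pair the question with a retrieved passage (here `doc_text` is a placeholder), and `strip_reflection_tokens` cleans the model's raw output before showing it to a user.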

Best Alternatives to Selfrag Zh Baichuan2 7B Chat

Best Alternatives | Context / RAM | Downloads | Likes
Baichuan2 7B PoSE Linear 16K | 16K / 15 GB | 18 | 1
Baichuan2 7B PoSE NTK 16K | 16K / 15 GB | 16 | 1
Baichuan2 7B PoSE YaRN 16K | 16K / 15 GB | 15 | 1
Baichuan2 7B Chat | 4K / 15 GB | 16802 | 165
DocLLM Baichuan2 7b | 4K / 18.3 GB | 20 | 25
Baichuan2 7B Base | 4K / 15 GB | 5786 | 79
Blossom V4 Baichuan2 7B | 4K / 15 GB | 16 | 1
HuatuoGPT2 7B | 4K / 15 GB | 13 | 16
HuatuoGPT2 7B 4bits | 4K / 5.4 GB | 17 | 5
HuatuoGPT2 7B 8bits | 4K / 8.7 GB | 6 | 2

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227