Chinese Text Correction 1.5B by shibing624


Tags: Autotrain compatible · Base model (finetune): Qwen/Qwen2.5-1.5B-Instruct · Conversational · Dataset: shibing624/chinese_text_correction · Endpoints compatible · Instruct · Qwen2 · Region: US · Safetensors · Language: zh

Chinese Text Correction 1.5B Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Chinese Text Correction 1.5B (shibing624/chinese-text-correction-1.5b)

Chinese Text Correction 1.5B Parameters and Internals

Model Type: text correction
Use Cases:
Areas: spelling correction, grammar correction
Primary Use Cases: Chinese Text Correction (CTC), Chinese Spelling Correction (CSC)
Additional Notes: The model is part of the pycorrector project and can be used with or without it; a standalone usage sketch follows the Input/Output details below.
Supported Languages: zh (proficient)
Training Details:
Data Sources: shibing624/chinese_text_correction
Training Time: 9 days 8 hours
Hardware Used: Tesla V100, 32 GB VRAM
Input/Output:
Input Format: plain text prompt (instruction-style; the model is a decoder-only causal LM, not an encoder-decoder)
Accepted Modalities: text
Output Format: corrected text
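
Since the checkpoint is a standard Qwen2-style instruct model, it can be used without pycorrector through the Hugging Face transformers API. Below is a minimal sketch, assuming the tokenizer's built-in chat template; the instruction wording and the sample sentence are illustrative assumptions, not the project's documented prompt format.

```python
# Minimal sketch: load the checkpoint with plain transformers (no pycorrector).
# The instruction text and example sentence below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shibing624/chinese-text-correction-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's "Torch Data Type"
    device_map="auto",
)

# Build a chat-style prompt via the tokenizer's chat template
# (the checkpoint is instruction-based, per the card).
# Sample sentence: "Young Pioneers should give up their seats to the elderly",
# containing two typos (因该 → 应该, 坐 → 座).
messages = [
    {"role": "user", "content": "文本纠错：\n少先队员因该为老人让坐。"}  # hypothetical prompt
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the corrected text).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (do_sample=False) is a sensible default for correction tasks, where the goal is the single most likely corrected string rather than diverse outputs.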
LLM Name: Chinese Text Correction 1.5B
Repository: https://huggingface.co/shibing624/chinese-text-correction-1.5b
Base Model(s): Qwen/Qwen2.5-1.5B-Instruct
Model Size: 1.5B
Required VRAM: 3.1 GB
Updated: 2025-02-22
Maintainer: shibing624
Model Type: qwen2
Instruction-Based: Yes
Model Files: 3.1 GB
Supported Languages: zh
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
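
The spec-table values above (architecture, context length, vocabulary size, padding token) can be cross-checked against the published checkpoint. Here is a small sketch using only the standard transformers AutoConfig/AutoTokenizer API; the expected values in the comments come from the table above.

```python
# Sketch: confirm the spec-table values against the published checkpoint.
from transformers import AutoConfig, AutoTokenizer

model_id = "shibing624/chinese-text-correction-1.5b"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)            # expected: ["Qwen2ForCausalLM"]
print(config.max_position_embeddings)  # expected: 32768 (context length)
print(config.vocab_size)               # expected: 151936
print(type(tokenizer).__name__)        # expected: Qwen2Tokenizer (or its fast variant)
print(tokenizer.pad_token)             # expected: <|endoftext|>
```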

Best Alternatives to Chinese Text Correction 1.5B

Best Alternatives | Context / RAM | Downloads | Likes
AceInstruct 1.5B | 128K / 3.5 GB | 592 | 17
Bellatrix 1.5B XElite | 128K / 3.5 GB | 287 | 9
Gte Qwen2 1.5B Instruct | 128K / 7.1 GB | 451232 | 171
... Abliterated TIES Qwen2.5 1.5B | 128K / 3.5 GB | 70 | 0
Saba1 1.8B | 128K / 3.6 GB | 68 | 1
Miniclaus Qw1.5B UNAMGS | 128K / 3.5 GB | 172 | 8
EVA Qwen2.5 1.5B V0.0 | 128K / 3.1 GB | 52 | 13
Replete Coder Qwen2 1.5B | 128K / 3.1 GB | 233 | 23
Qwen2 1.5B Instruct Refine | 128K / 3.1 GB | 141 | 0
Samantha Qwen2 1.5B | 128K / 3.1 GB | 144 | 0
Note: a green score (e.g. "73.2") means that the model is better than shibing624/chinese-text-correction-1.5b.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227