Bert Chinese L 12 H 768 A 12 by Tongjilibo


Tags: Endpoints compatible · PyTorch · Region: US


Bert Chinese L 12 H 768 A 12 Parameters and Internals

Model Type: Text Embedding, Transformer

Use Cases
  Areas: Research, Natural Language Processing
  Applications: Syntactic and semantic text analysis, text embedding generation
  Primary Use Cases: Chinese text embedding for natural language understanding tasks

Additional Notes: The 'chinese_L-12_H-768_A-12' and 'bert-base-chinese' BERT weights are reportedly identical.

Supported Languages: Chinese (proficient)

Training Details
  Data Sources: Chinese datasets
  Methodology: Transformer architecture
  Context Length: 512
  Model Architecture: 12-layer, 768-hidden, 12-head BERT

Input/Output
  Input Format: Tokenized input sequences
  Accepted Modalities: Text
  Output Format: Text embeddings
  Performance Tips: Start from the pre-trained weights rather than training from scratch for downstream Chinese NLU tasks.
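
Given the input/output description above (tokenized Chinese text in, text embeddings out), a minimal usage sketch follows. It assumes the checkpoint loads through Hugging Face transformers' AutoModel/AutoTokenizer, consistent with the AutoModel architecture entry below, and uses mean pooling over the last hidden state, which is one common (not prescribed) way to turn token states into a sentence embedding:

# Minimal sketch. Assumption: the repo is loadable with transformers'
# AutoModel/AutoTokenizer; mean pooling is an illustrative choice.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "Tongjilibo/bert-chinese_L-12_H-768_A-12"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
model.eval()

sentences = ["今天天气很好。", "自然语言处理很有趣。"]
# Truncate at the model's 512-token limit.
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

# Mean-pool the last hidden state over non-padding tokens:
# one 768-dimensional vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([2, 768])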
LLM Name: Bert Chinese L 12 H 768 A 12
Repository: https://huggingface.co/Tongjilibo/bert-chinese_L-12_H-768_A-12
Required VRAM: 0.4 GB
Updated: 2025-01-20
Maintainer: Tongjilibo
Model Files: 0.4 GB
Model Architecture: AutoModel
License: apache-2.0
Context Length: 512
Model Max Length: 512
Vocabulary Size: 21128
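
The listed hyperparameters (12 layers, 768 hidden units, 12 attention heads, 512-token maximum length, 21128-token vocabulary) can be cross-checked against the repository itself. A small sketch, assuming a standard transformers config and vocabulary file are present in the repo:

from transformers import AutoConfig, AutoTokenizer

repo_id = "Tongjilibo/bert-chinese_L-12_H-768_A-12"
config = AutoConfig.from_pretrained(repo_id)   # assumption: config.json follows the BERT schema
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(config.num_hidden_layers)        # expected: 12
print(config.hidden_size)              # expected: 768
print(config.num_attention_heads)      # expected: 12
print(config.max_position_embeddings)  # expected: 512
print(tokenizer.vocab_size)            # expected: 21128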

Best Alternatives to Bert Chinese L 12 H 768 A 12

Best Alternatives              Context / RAM    Downloads  Likes
Distil Longformer Base 4096    4K / 0.4 GB      8          0
Daedalus 1                     1K /  GB         13         1
Tiny Random Detr               1K / 0.2 GB      5          0
Opengpt2 Pytorch Backward      1K / 6 GB        19         1
Opengpt2 Pytorch Forward       1K / 6 GB        9          1
Finsent Transformer            0.5K / 0.4 GB    4          1
Simbert Chinese Base           0.5K / 0.4 GB    16         0
Simbert Chinese Tiny           0.5K / 0 GB      11         0
All MiniLM L12 V2              0.5K /  GB       618        3
Bert Tiny                      0.5K / 0 GB      592275     103


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227