| Field | Value |
|---|---|
| LLM Name | LongAlign 6B 64K |
| Repository 🤗 | https://huggingface.co/THUDM/LongAlign-6B-64k |
| Model Size | 6B |
| Required VRAM | 12.5 GB |
| Updated | 2025-02-22 |
| Maintainer | THUDM |
| Model Type | chatglm |
| Model Files | |
| Supported Languages | en, zh |
| Model Architecture | ChatGLMForConditionalGeneration |
| License | apache-2.0 |
| Transformers Version | 4.33.0 |
| Tokenizer Class | ChatGLMTokenizer |
| Vocabulary Size | 65024 |
| Torch Data Type | bfloat16 |
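Given the repository, tokenizer class, and dtype listed above, the model can presumably be loaded with the Hugging Face `transformers` library. A minimal sketch follows; the helper name `load_longalign` is ours, and `trust_remote_code=True` is assumed because ChatGLM-family models ship custom modeling code not bundled with `transformers`:

```python
def load_longalign(model_id: str = "THUDM/LongAlign-6B-64k"):
    """Hypothetical helper: load LongAlign-6B-64k (tokenizer + model).

    ChatGLM models define their architecture in the repository itself,
    so trust_remote_code=True is required. Downloading the weights
    needs roughly 12.5 GB of disk space and VRAM (per the card above).
    """
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(
        model_id,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,  # matches the card's listed dtype
    )
    return tokenizer, model
```

Usage would then be along the lines of `tokenizer, model = load_longalign()`, after which ChatGLM checkpoints typically expose a `model.chat(tokenizer, prompt)` convenience method; consult the repository's README for the exact inference interface.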
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ChatGLM3 6B Theresa | 0K / 12.5 GB | 9 | 1 |
| LongAlign 6B 64K Base | 0K / 12.3 GB | 28 | 5 |