Gogpt 560M by golaxy


Autotrain compatible · BLOOM · Datasets: BelleGroup/school_math_0.25M, BelleGroup/train_0.5M_CN, BelleGroup/train_1M_CN, BelleGroup/train_2M_CN, BelleGroup/train_3.5M_CN · Endpoints compatible · PyTorch · Region: US · Language: zh
Model Card on HF 🤗: https://huggingface.co/golaxy/gogpt-560m

Gogpt 560M Benchmarks


Gogpt 560M Parameters and Internals

Model Type 
Instruction-tuned
Use Cases 
Areas:
Research, Commercial applications
Limitations:
Limited by the quality of the input datasets; focused on Chinese instructions
Additional Notes 
Model is tailored for Chinese instructions using diverse and high-quality datasets.
Supported Languages 
zh (fluent)
Training Details 
Data Sources:
BelleGroup/train_2M_CN, BelleGroup/train_3.5M_CN, BelleGroup/train_1M_CN, BelleGroup/train_0.5M_CN, BelleGroup/school_math_0.25M
Methodology:
Instruction tuning on BLOOM using Chinese datasets
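As a sketch of what the methodology above implies, the snippet below flattens a Belle-style instruction record into a single causal-LM training string. The field names ("instruction", "output") match the BelleGroup dataset schema, but the prompt template itself is an assumption for illustration, not the exact one used for gogpt-560m.

```python
# Hypothetical sketch: turn a Belle-style instruction record into one
# training string for a causal LM. The "Human:/Assistant:" template is
# an assumption; only the field names come from the BelleGroup datasets.
def build_prompt(record: dict) -> str:
    """Join an instruction/response pair into a single prompt string."""
    return (
        "Human: " + record["instruction"].strip() + "\n"
        "Assistant: " + record["output"].strip()
    )

example = {
    "instruction": "把下面的句子翻译成英文:你好,世界。",
    "output": "Hello, world.",
}
print(build_prompt(example))
```

During fine-tuning, strings like this are tokenized with the model's BLOOM tokenizer and used as ordinary next-token-prediction targets.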
LLM Name: Gogpt 560M
Repository 🤗: https://huggingface.co/golaxy/gogpt-560m
Model Size: 560m
Required VRAM: 1 GB
Updated: 2024-12-04
Maintainer: golaxy
Model Type: bloom
Model Files: 1.0 GB, 2.2 GB, 0.0 GB
Supported Languages: zh
Model Architecture: BloomForCausalLM
License: apache-2.0
Model Max Length: 2048
Transformers Version: 4.29.1
Tokenizer Class: BloomTokenizer
Padding Token: <pad>
Vocabulary Size: 250880
Torch Data Type: float32
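As a sanity check on the figures above, the weight footprint can be estimated from the parameter count and the torch data type. This is a rough sketch: 560M is the nominal size from the card, and bytes-per-parameter depends on the dtype chosen at load time.

```python
# Back-of-the-envelope weight footprint for the listed configuration.
# 560M is the nominal parameter count from the card (an approximation).
PARAMS = 560_000_000
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1}

for dtype, width in BYTES_PER_PARAM.items():
    gib = PARAMS * width / 1024**3
    print(f"{dtype}: ~{gib:.1f} GiB of weights")
```

At the card's float32 dtype this works out to roughly 2.1 GiB, in line with the 2.2 GB model file listed above; loading in half precision would roughly halve that.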
Gogpt 560M (golaxy/gogpt-560m)

Best Alternatives to Gogpt 560M

Best Alternatives            Context / RAM    Downloads    Likes
Train Test Bloom560          0K / 2.2 GB      8            0
Bloomz 560M                  0K / 1.1 GB      14897077     109
Promt Generator              0K / 2.2 GB      11701        7
Train Test                   0K / 2.2 GB      30           0
Product Description Fr       0K / 2.2 GB      10           0
Guitester                    0K / 2.2 GB      7            0
Bloomz 560M Sft Chat         0K / 1.1 GB      1435         10
ModeloAJustadoBloom1         0K / 2.2 GB      6            0
Bloom 560M RLHF V2           0K / 1.1 GB      1428         3
Bloom 560M RLHF              0K / 1.1 GB      1428         1
Note: a green score (e.g. "73.2") means that the model is better than golaxy/gogpt-560m.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124