| Attribute | Value |
|---|---|
| LLM Name | Phantasor V0.3 137M |
| Repository 🤗 | https://huggingface.co/XeTute/Phantasor_V0.3-137M |
| Base Model(s) | |
| Model Size | 137M |
| Required VRAM | 0.5 GB |
| Updated | 2025-03-05 |
| Maintainer | XeTute |
| Model Type | gpt2 |
| Model Files | |
| Supported Languages | zh, en |
| Model Architecture | GPT2LMHeadModel |
| License | MIT |
| Model Max Length | 1024 |
| Transformers Version | 4.48.2 |
| Tokenizer Class | GPT2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 50257 |
| Torch Data Type | float32 |
| Activation Function | gelu_new |
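A minimal loading sketch with the 🤗 Transformers library, based on the configuration above (GPT2LMHeadModel, GPT2Tokenizer, float32, max length 1024). The prompt and generation settings are illustrative assumptions, not values from the card:

```python
# Minimal sketch, assuming transformers >= 4.48 and torch are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XeTute/Phantasor_V0.3-137M"  # repository listed above

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to GPT2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # matches the card's Torch Data Type
)

prompt = "Once upon a time"  # hypothetical prompt for illustration
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=256,  # must stay within the model max length of 1024
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # card lists <|endoftext|> as padding token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At roughly 0.5 GB in float32, the model fits comfortably on CPU or any modern GPU, so no quantization or device mapping is required for this sketch.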
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Gpt2 | 0K / 0.5 GB | 10505415 | 2759 |
| Phantasor 137M | 0K / 0.5 GB | 211 | 1 |
| Phantasor V0.1 137M | 0K / 0.5 GB | 93 | 1 |
| Phantasor V0.2 137M | 0K / 0.5 GB | 73 | 1 |
| Gpt2 Auth | 0K / 0.5 GB | 69 | 0 |
| My GPT2 | 0K / 0.5 GB | 31 | 0 |
| Xuanxuan | 0K / 0.3 GB | 10 | 0 |
| Gpt2 Test | 0K / 0.5 GB | 93 | 0 |
| ...edical Transcription Generator | 0K / 0.5 GB | 26 | 4 |
| Gpt2023 | 0K / 0.3 GB | 27 | 18 |