Model Type |
Use Cases |
Additional Notes |
Supported Languages |
Training Details |
Input Output |
LLM Name | GPT Neo X 1.3B Qlora Test |
Repository 🤗 | https://huggingface.co/WilAI/gpt-neo-x-1.3b-qlora-test |
Model Size | 1.3b |
Required VRAM | 0 GB |
Updated | 2025-02-22 |
Maintainer | WilAI |
Model Files | |
Model Architecture | AutoModel |
Is Biased | none |
PEFT Type | LORA |
LoRA Model | Yes |
PEFT Target Modules | attn.attention.k_proj, attn.attention.q_proj, attn.attention.v_proj |
LoRA Alpha | 32 |
LoRA Dropout | 0.05 |
LoRA Rank (r) | 8 |
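The adapter hyperparameters listed above (LoRA with r = 8, alpha = 32, dropout 0.05, targeting the attention k/q/v projections, no bias) map directly onto a PEFT `LoraConfig`, and the published adapter can be attached to its base model with `PeftModel`. A minimal sketch, assuming the base checkpoint is EleutherAI/gpt-neo-1.3B (the `attn.attention.*_proj` module names match GPT-Neo) and that 4-bit NF4 loading is used to echo the QLoRA setup:

```python
# Sketch only: reconstructing the listed LoRA hyperparameters and attaching the adapter.
# Assumptions (not stated in the card): base model is EleutherAI/gpt-neo-1.3B, and 4-bit
# NF4 loading is used to mirror a typical QLoRA setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, PeftModel

base_id = "EleutherAI/gpt-neo-1.3B"             # assumed base checkpoint
adapter_id = "WilAI/gpt-neo-x-1.3b-qlora-test"  # the adapter listed above

# LoRA config mirroring the table: r=8, alpha=32, dropout=0.05, k/q/v projections, no bias.
# This is what get_peft_model() would take when re-training from scratch.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "attn.attention.k_proj",
        "attn.attention.q_proj",
        "attn.attention.v_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)

# QLoRA-style 4-bit base model load, then attach the published adapter weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
```

Note that `PeftModel.from_pretrained` reads the adapter's own saved config from the repository; the explicit `lora_config` above is shown only to make the listed hyperparameters concrete.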
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Diablo Italian Chatbot 1.3B | 2K / 2.6 GB | 58 | 0 |
Diablo Italian Base 1.3B | 2K / 2.6 GB | 15 | 0 |
Test Discriminator | 0K / 0 GB | 76 | 0 |
Cerebras GPT 1.3B | 0K / 5.4 GB | 2535 | 49 |
...lb 200 Distilled 1.3B Ct2 Int8 | 0K / 1.4 GB | 3007 | 4 |
...pseek Coder 1.3B Instruct GGUF | 0K / 0.6 GB | 35399 | 34 |
Deepseek Coder 1.3B Base GGUF | 0K / 0.6 GB | 4536 | 6 |
...eared LLaMA 1.3B ShareGPT GGUF | 0K / 0.6 GB | 289 | 2 |