| LLM Name | Xddd |
|---|---|
| Repository 🤗 | https://huggingface.co/hghghgkskdmskdms/xddd |
| Model Size | 3.2B |
| Required VRAM | 6.5 GB |
| Updated | 2025-06-02 |
| Maintainer | hghghgkskdmskdms |
| Model Type | llama |
| Model Files | |
| Model Architecture | LlamaForCausalLM |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.48.3 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <\|finetune_right_pad_id\|> |
| Vocabulary Size | 128256 |
| Torch Data Type | float16 |
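
Given the configuration above (a LlamaForCausalLM checkpoint stored in float16 with a 131072-token context window), the model should load with the standard Hugging Face Transformers API. Below is a minimal sketch, assuming the repository is publicly accessible, `transformers` >= 4.48.3, `torch`, and `accelerate` are installed, and roughly 6.5 GB of free VRAM is available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "hghghgkskdmskdms/xddd"

# The tokenizer files in the repo resolve to a PreTrainedTokenizerFast instance.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# float16 matches the listed torch dtype; device_map="auto" (requires accelerate)
# places the ~6.5 GB of weights on the available GPU.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```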
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Xddd Mlx 2Bit | 0 | 7 | 1 GB |
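
The only quantized conversion listed is a 2-bit MLX build, which targets Apple-silicon Macs through the `mlx-lm` package and fits in roughly 1 GB of memory, as shown above. A minimal loading sketch follows; the repository path is a hypothetical placeholder, since the listing does not give the exact MLX repo id.

```python
from mlx_lm import load, generate

# Placeholder repo id for the 2-bit MLX conversion; substitute the actual path.
model, tokenizer = load("hghghgkskdmskdms/xddd-mlx-2bit")

response = generate(model, tokenizer, prompt="Hello, how are you?", max_tokens=64)
print(response)
```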
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llama3.2 Resized | 128K / 6.5 GB | 432 | 0 |
| Rs44 | 128K / 6.5 GB | 582 | 0 |
| BG58 | 128K / 6.5 GB | 7 | 0 |
| ColdBrew Oracle | 128K / 6.5 GB | 11 | 0 |
| ISLEXITAS WEL SHREE 02 12 24 | 128K / 6.5 GB | 15 | 0 |
| Testing V01 | 128K / 6.5 GB | 20 | 0 |
| Q448 | 128K / 6.5 GB | 419 | 0 |
| ...uvery Train 0.0.2 Merged 16bit | 128K / 6.5 GB | 9 | 0 |
| TaraV2 M16B | 128K / 6.5 GB | 15 | 0 |