LLM Name | Qwen2p5 14B Distill Wip |
Repository 🤗 | https://huggingface.co/chargoddard/qwen2p5-14b-distill-wip |
Model Size | 14b |
Required VRAM | 29.7 GB |
Updated | 2024-09-26 |
Maintainer | chargoddard |
Model Type | qwen2 |
Instruction-Based | Yes |
Model Files | |
Model Architecture | Qwen2ForCausalLM |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.44.2 |
Tokenizer Class | Qwen2Tokenizer |
Padding Token | <|endoftext|> |
Vocabulary Size | 152064 |
Torch Data Type | bfloat16 |
Tokenizer Errors Policy | replace |
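
The metadata above is enough to load the model with the standard Hugging Face `transformers` API. Below is a minimal sketch: the repository ID, `Qwen2ForCausalLM` architecture, and `bfloat16` dtype come from the table, while the device placement and example prompt are illustrative assumptions rather than part of the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chargoddard/qwen2p5-14b-distill-wip"  # repository from the table above

# Tokenizer resolves to Qwen2Tokenizer; its padding token is <|endoftext|>.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 matches the listed Torch Data Type; expect roughly 29.7 GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # assumption: `accelerate` is installed for automatic placement
)

# Illustrative generation call; the prompt is a placeholder.
inputs = tokenizer("Write a haiku about distillation.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the model is listed as instruction-based, prompts are typically formatted with `tokenizer.apply_chat_template` if the repository ships a chat template; inputs up to the 32768-token context length are supported.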
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
EVA Qwen2.5 14B V0.0 | 128K / 29.7 GB | 177 | 13 |
EVA Qwen2.5 14B V0.1 | 128K / 29.7 GB | 102 | 5 |
Qwen2.5 14B Instruct | 32K / 29.6 GB | 73447 | 69 |
Replete LLM V2.5 Qwen 14B | 32K / 29.7 GB | 437 | 14 |
Qwen2.5 14B Gutenberg 1e Delta | 32K / 29.7 GB | 997 | 4 |
Qwen2.5 14B Instruct | 32K / 29.7 GB | 368 | 6 |
NightyGurps 14B V1.1 | 32K / 29.7 GB | 7 | 7 |
Qwen2p5 14B Llamatok | 32K / 29.2 GB | 19 | 0 |
Sailor 14B Chat | 32K / 28.4 GB | 51 | 11 |
...wen2.5 14B Uncensored Instruct | 128K / 29.7 GB | 7 | 3 |