| LLM Name | Qwen 7B Distill Reasoner |
|---|---|
| Repository 🤗 | https://huggingface.co/prithivMLmods/Qwen-7B-Distill-Reasoner |
| Base Model(s) | |
| Model Size | 7B |
| Required VRAM | 15.2 GB |
| Updated | 2025-03-14 |
| Maintainer | prithivMLmods |
| Model Type | qwen2 |
| Model Files | |
| Supported Languages | en |
| Model Architecture | Qwen2ForCausalLM |
| License | apache-2.0 |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.47.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <\|vision_pad\|> |
| Vocabulary Size | 152064 |
| Torch Data Type | float16 |
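For reference, a minimal sketch of loading this checkpoint with Hugging Face Transformers (version 4.47.1 or later, per the table above). The chat-template call assumes the repository ships a chat template, `device_map="auto"` assumes the `accelerate` package is installed, and the prompt text is illustrative only:

```python
# Minimal loading sketch; assumes a GPU with roughly 16 GB of VRAM
# (the table lists 15.2 GB required) and transformers >= 4.47.1.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Qwen-7B-Distill-Reasoner"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",          # assumption: accelerate is available
)

# Assumption: the checkpoint provides a chat template (typical for Qwen2-based models).
messages = [{"role": "user", "content": "Explain why the sky appears blue in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```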
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen2.5 7B Instruct 1M | 986K / 15.4 GB | 333128 | 267 |
| Hush Qwen2.5 7B Preview | 986K / 15.2 GB | 84 | 0 |
| Hush Qwen2.5 7B RP V1.4 1M | 986K / 15.2 GB | 36 | 2 |
| Hush Qwen2.5 7B V1.1 | 986K / 15.2 GB | 28 | 1 |
| Hush Qwen2.5 7B V1.3 | 986K / 15.2 GB | 21 | 2 |
| Hush Qwen2.5 7B V1.4 | 986K / 15.2 GB | 24 | 1 |
| Hush Qwen2.5 7B V1.2 | 986K / 15.2 GB | 21 | 1 |
| Qwen2.5 7B Preview | 986K / 15.2 GB | 19 | 0 |
| Qwen2.5 7B RRP 1M | 986K / 15.2 GB | 206 | 6 |
| Qwen2.5 7B MixStock V0.1 | 986K / 15.2 GB | 154 | 3 |