LLM Name | Qwen2.5 Coder Scholar 7B Abliterated MFANN |
Repository 🤗 | https://huggingface.co/netcat420/Qwen2.5-Coder-Scholar-7B-Abliterated-MFANN |
Model Size | 7B |
Required VRAM | 30.5 GB |
Updated | 2025-01-17 |
Maintainer | netcat420 |
Model Type | qwen2 |
Instruction-Based | Yes |
Model Files | |
Generates Code | Yes |
Model Architecture | Qwen2ForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.47.1 |
Tokenizer Class | Qwen2Tokenizer |
Padding Token | <|im_end|> |
Vocabulary Size | 152064 |
Torch Data Type | float32 |
Tokenizer Errors Policy | replace |
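
A minimal loading sketch based on the specs above (repository ID from the link, Qwen2Tokenizer, instruction-tuned, float32 weights needing roughly 30.5 GB); the API calls are standard `transformers` usage, not anything specific to this checkpoint:

```python
# Sketch: load the model and run a code-generation prompt.
# Assumes transformers >= 4.47.1 (the version listed above) and enough
# memory for the float32 checkpoint (~30.5 GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/Qwen2.5-Coder-Scholar-7B-Abliterated-MFANN"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer per the table
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # the checkpoint's native dtype per the table
    device_map="auto",          # requires the accelerate package
)

# The card marks the model as instruction-based, so use the chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
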
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
StockQwen 2.5 7B | 128K / 15.2 GB | 19 | 3 |
Qwen2.5 Coder 7B Instruct | 32K / 15.2 GB | 112569 | 383 |
Dria Agent A 7B | 32K / 15.2 GB | 127 | 31 |
Qwen2.5 Coder 7B Instruct Ties | 32K / 15.2 GB | 151 | 0 |
Qwen2.5 Coder 7B Instruct | 32K / 15.2 GB | 5502 | 4 |
Qwen Bolt Coder | 32K / 15.2 GB | 18 | 1 |
MilkDropLM 7B V0.3 | 32K / 15.2 GB | 19 | 12 |
Rombos Coder V2.5 Qwen 7B | 32K / 15.2 GB | 129 | 3 |
...en2 Coder7b Reflct Adamw Iter1 | 32K / 15.2 GB | 91 | 0 |
Arch Function 7B | 32K / 15.2 GB | 157 | 11 |
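
The 30.5 GB VRAM figure above reflects the float32 storage dtype (about 7.6B parameters × 4 bytes ≈ 30.5 GB), while the 15.2 GB alternatives are half-precision checkpoints (2 bytes per parameter). A short sketch, assuming standard `transformers` down-casting at load time, of loading this same checkpoint in bfloat16 to roughly halve the footprint:

```python
# Sketch: down-cast the float32 weights to bfloat16 at load time,
# bringing memory use to roughly 15.2 GB instead of 30.5 GB.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "netcat420/Qwen2.5-Coder-Scholar-7B-Abliterated-MFANN",
    torch_dtype=torch.bfloat16,  # cast from the stored float32
    device_map="auto",           # requires the accelerate package
)
```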