| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Qwen2 72B Instruct AWQ | 37 | 17790 | 41 GB |
| Qwen2 72B Instruct GPTQ Int8 | 14 | 10708 | 77 GB |
| Qwen2 72B Instruct GPTQ Int4 | 31 | 5186 | 41 GB |
| Qwen2 72B Instruct GGUF | 0 | 168 | 13 GB |
| Qwen2 72B Instruct GGUF | 0 | 145 | 13 GB |
| Qwen2 72B Instruct 4bit | 2 | 15 | 40 GB |
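As a rough guide to the VRAM column above, the AWQ and GPTQ builds can be loaded directly with Hugging Face Transformers, while the GGUF files are typically run with llama.cpp instead. The sketch below is a minimal example, assuming the repo id `Qwen/Qwen2-72B-Instruct-AWQ`, an installed `autoawq` backend, and a GPU (or GPUs) with roughly the amount of memory shown in the table.

```python
# Minimal sketch: loading one of the quantized builds listed above.
# Assumptions: repo id "Qwen/Qwen2-72B-Instruct-AWQ", transformers + autoawq
# installed, and enough GPU memory (~41 GB for the AWQ build per the table).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B-Instruct-AWQ"  # swap for the GPTQ Int8/Int4 repo if preferred

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the quantized checkpoint
    device_map="auto",    # spread layers across available GPUs / CPU offload
)

messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```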
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| EVA Qwen2.5 72B V0.2 | 128K / 146 GB | 1192 | 10 |
| ...n2.5 72B 2x Instruct TIES V1.0 | 128K / 146.1 GB | 22 | 1 |
| 72B Qwen2.5 Kunou V1 | 128K / 146 GB | 531 | 18 |
| EVA Qwen2.5 72B V0.1 | 128K / 146 GB | 650 | 13 |
| EVA Qwen2.5 72B V0.0 | 128K / 146 GB | 144 | 5 |
| Athene V2 Chat | 32K / 146 GB | 7048 | 244 |
| Qwen2.5 72B Instruct | 32K / 145.5 GB | 269638 | 615 |
| Rombos LLM V2.5 Qwen 72B | 32K / 146.1 GB | 3031 | 30 |
| Magnum V1 72B | 32K / 146 GB | 5569 | 162 |
| RYS XLarge | 32K / 156.3 GB | 2790 | 79 |