Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
---|---|---|---|---
MixTAO 7Bx2 MoE V8.1 GGUF | 68 | 0K / 4.8 GB | 746 | 10 |
FusionNet 34Bx2 MoE GGUF | 67.6 | 0K / 22.4 GB | 270 | 5 |
...AO 7Bx2 MoE Instruct V7.0 GGUF | 67.1 | 0K / 4.8 GB | 335 | 10 |
Helion 4x34B GGUF | 66.2 | 0K / 41.5 GB | 32 | 3 |
Cosmosis 3x34B GGUF | 66.1 | 0K / 31.9 GB | 83 | 6 |
...Top 5x7B Instruct S5 V0.1 GGUF | 65.9 | 0K / 2.7 GB | 171 | 1 |
...eTop 5x7B Instruct T V0.1 GGUF | 65.7 | 0K / 2.7 GB | 179 | 0 |
...Top 5x7B Instruct S4 V0.1 GGUF | 65.7 | 0K / 2.7 GB | 173 | 0 |
Quantum DPO V0.1 GGUF | 65.7 | 0K / 3.1 GB | 232 | 1 |
Quantum V0.01 GGUF | 65.5 | 0K / 3.1 GB | 235 | 2 |
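The table above lists each alternative's quantized file size, which gives a rough lower bound on the memory needed to run it. As a minimal sketch (the `fits_in_ram` helper and the 1.2x working-memory overhead factor are assumptions, not figures from this page), you can screen the table against a RAM budget like so:

```python
def fits_in_ram(file_size_gb: float, ram_gb: float, overhead: float = 1.2) -> bool:
    """A GGUF model needs roughly its file size in memory to load,
    plus some working-memory overhead (the 1.2x factor is an assumption)."""
    return file_size_gb * overhead <= ram_gb

# File sizes (GB) taken from the alternatives table above.
models = {
    "MixTAO 7Bx2 MoE V8.1 GGUF": 4.8,
    "FusionNet 34Bx2 MoE GGUF": 22.4,
    "Helion 4x34B GGUF": 41.5,
    "Quantum DPO V0.1 GGUF": 3.1,
}

for name, size_gb in models.items():
    print(f"{name}: fits in 16 GB -> {fits_in_ram(size_gb, ram_gb=16.0)}")
```

With a 16 GB budget, only the smaller 7B-class quantizations pass this check; the 34B merges do not.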
LLM Name | Go Bruins V2.1.1 GGUF |
Repository | Open on 🤗
Model Name | Go Bruins v2.1.1 |
Model Creator | Ryan Witzman
Base Model(s) | |
Required VRAM | 3.1 GB |
Updated | 2024-07-01 |
Maintainer | TheBloke |
Model Type | mistral |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | AutoModel |
License | cc |
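Since this is a GGUF-quantized Mistral-type model, one common way to run it locally is via `llama-cpp-python`. The sketch below is a hedged example, not an official usage snippet from the repository: the `.gguf` filename is hypothetical (pick an actual quantization file from the model files on the repository page), and `n_ctx=2048` is an assumed context setting.

```python
from pathlib import Path

# Hypothetical filename -- substitute a real .gguf file downloaded
# from the repository's model files.
model_path = Path("go-bruins-v2.1.1.Q4_K_M.gguf")

if model_path.exists():
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Load the quantized model; needs roughly the 3.1 GB file size in RAM/VRAM.
    llm = Llama(model_path=str(model_path), n_ctx=2048)
    out = llm("Q: What is the capital of France? A:", max_tokens=32)
    print(out["choices"][0]["text"])
else:
    print("model file not present; download a .gguf from the repository first")
```

The load itself is where the Required VRAM figure above (3.1 GB) comes into play; generation adds context-dependent overhead on top of that.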