| Field | Value |
|---|---|
| LLM Name | Go Bruins V2 GPTQ |
| Repository 🤗 | https://huggingface.co/TheBloke/go-bruins-v2-GPTQ |
| Model Name | Go Bruins v2 |
| Model Creator | Ryan Witzman |
| Base Model(s) | |
| Model Size | 1.2b |
| Required VRAM | 4.2 GB |
| Updated | 2025-02-22 |
| Maintainer | TheBloke |
| Model Type | mistral |
| Model Files | |
| Supported Languages | en |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | MistralForCausalLM |
| License | MIT |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.35.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `</s>` |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
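As a quick orientation, the sketch below shows one way to load this GPTQ checkpoint with the `transformers` library, which picks up the quantization config automatically when `optimum` and `auto-gptq` are installed. The prompt text and generation parameters are illustrative assumptions, not part of the model card.

```python
# Minimal loading sketch; assumes transformers, optimum, auto-gptq,
# and accelerate are installed. Values mirror the card above:
# GPTQ weights (~4.2 GB VRAM), 32768-token context, float16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/go-bruins-v2-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer per the card
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place layers on the available GPU(s)
    torch_dtype=torch.float16,  # matches the card's torch dtype
)

# Illustrative prompt; swap in whatever prompt template the base model expects.
prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

`device_map="auto"` lets `accelerate` shard the model across available devices; on a single GPU with at least the listed 4.2 GB of VRAM the whole model should fit on one device.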
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... Finetune 16bit Ver9 Main GPTQ | 32K / 4.2 GB | 11 | 0 |
| Dictalm2.0 Instruct GPTQ | 32K / 4.2 GB | 150 | 0 |
| Dictalm2.0 GPTQ | 32K / 4.2 GB | 106 | 0 |
| Multi Verse Model GPTQ | 32K / 4.2 GB | 109 | 1 |
| Turdus GPTQ | 32K / 4.2 GB | 86 | 5 |
| Garrulus GPTQ | 32K / 4.2 GB | 43 | 3 |
| Phoenix GPTQ | 32K / 4.2 GB | 75 | 1 |
| HamSter 0.1 GPTQ | 32K / 4.2 GB | 31 | 2 |
| Mistral Ft Optimized 1227 GPTQ | 32K / 4.2 GB | 29 | 2 |
| Metis 0.5 GPTQ | 32K / 4.2 GB | 26 | 1 |