| Field | Value |
|---|---|
| Additional Notes | |
| LLM Name | SnowyRP V2 13B L2 BetaTest Q4 K M GGUF |
| Repository 🤗 | https://huggingface.co/Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF |
| Base Model(s) | |
| Model Size | 13b |
| Required VRAM | 7.9 GB |
| Updated | 2025-02-05 |
| Maintainer | Clevyby |
| Model Type | llama |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | fp16\|gguf\|q4 |
| Model Architecture | LlamaForCausalLM |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.37.2 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
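Given the specs above (GGUF format, Q4_K_M quantization, 4096-token context, ~7.9 GB VRAM), a common way to run this file locally is llama-cpp-python. The sketch below is illustrative only: the exact `.gguf` filename is an assumption inferred from the repository name, so verify it against the repository's file list before use.

```python
# Minimal sketch of loading this Q4_K_M GGUF with llama-cpp-python.
# Assumes: `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Clevyby/SnowyRP-V2-13B-L2_BetaTest-Q4_K_M-GGUF",
    # Hypothetical filename inferred from the repo name -- check the
    # repository's file list for the actual .gguf file.
    filename="snowyrp-v2-13b-l2_betatest-q4_k_m.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # matches the 4096-token context length in the table
    n_gpu_layers=-1,  # offload all layers if ~7.9 GB of VRAM is available
)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```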
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llm Compiler 13B GGUF | 16K / 4.8 GB | 41 | 0 |
| Llm Compiler 13B Ftd GGUF | 16K / 4.8 GB | 37 | 0 |
| Llm Compiler 13B Ftd GGUF | 16K / 4.8 GB | 25 | 0 |
| Llm Compiler 13B GGUF | 16K / 4.8 GB | 14 | 0 |
| CodeLlama 13B Instruct GGUF | 16K / 5.4 GB | 185 | 2 |
| Luminia 13B V3 | 4K / 26 GB | 41 | 5 |
| DiarizationLM 13B Fisher V1 | 4K / 26 GB | 189 | 12 |
| Mythomax L2 13B Q4 K M GGUF | 4K / 8.1 GB | 120 | 1 |
| HyperLlama2Test | 4K / 26 GB | 5 | 0 |
| AppleSauce L2 13B | 4K / 26.7 GB | 1297 | 1 |