LLM Name | Vicuna2 |
---|---|
Repository 🤗 | https://huggingface.co/chavinlo/vicuna2 |
Model Size | 13b |
Required VRAM | 0 GB |
Updated | 2025-01-20 |
Maintainer | chavinlo |
Model Type | llama |
Model Files | |
Model Architecture | LLaMAForCausalLM |
Model Max Length | 1024 |
Transformers Version | 4.27.0.dev0 |
Tokenizer Class | LLaMATokenizer |
Vocabulary Size | 32001 |
Torch Data Type | float32 |
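A minimal loading sketch, assuming the repository actually hosts the weight files (the Model Files row above is empty) and a machine with enough memory for float32 13B weights. The config pins pre-release class names (LLaMATokenizer / LLaMAForCausalLM from transformers 4.27.0.dev0); released transformers (4.28+) renamed these, so the Llama* classes are used explicitly below. The prompt string is only a placeholder:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer  # needs sentencepiece installed

model_id = "chavinlo/vicuna2"

# Load the SentencePiece tokenizer directly to sidestep the outdated
# "LLaMATokenizer" class name stored in the repo's tokenizer_config.json.
tokenizer = LlamaTokenizer.from_pretrained(model_id)

# Weights are stored in float32, so expect roughly 52 GB of RAM/VRAM.
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

# Placeholder prompt; keep prompt + generation inside the 1024-token max length.
inputs = tokenizer("Question: What is Vicuna?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```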
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Llm Jp 13B V2.0 | 4K / 27.4 GB | 270 | 15 |
Decapoda Research Llama 13B | 0K / 41 GB | 10 | 0 |
LIMA 13B | 0K / 42 GB | 583 | 1 |
Alpaca 13B | 0K / 52.1 GB | 1136 | 108 |
Llama 13B | 0K / 42 GB | 24 | 1 |
Llama 13B | 0K / 42 GB | 16 | 3 |
... X Alpaca 13B Native 4bit 128g | 0K / 7.9 GB | 748 | 736 |
... X Alpaca 13B Native 4bit 128g | 0K / 8.1 GB | 10 | 2 |
Llama 13B 4bit Hf | 0K / 7 GB | 16 | 2 |
Llama 13B 4bit Gr128 | 0K / 7.5 GB | 9 | 2 |
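The Context / RAM column pairs each alternative's context window with its weight-file size. As a rule of thumb (an estimate, not a figure from this table), weight memory is roughly parameter count times bytes per parameter; the "Required VRAM: 0 GB" above simply reflects the empty Model Files row:

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight size in decimal GB; ignores activations and KV cache."""
    return n_params * bytes_per_param / 1e9

print(weight_memory_gb(13e9, 4))    # float32: ~52 GB (cf. Alpaca 13B, 52.1 GB)
print(weight_memory_gb(13e9, 2))    # float16: ~26 GB
print(weight_memory_gb(13e9, 0.5))  # 4-bit:   ~6.5 GB (cf. the ~8 GB 4bit rows)
```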