LLM Name | Llma3 Manydata Not Our Data Rope
Repository 🤗 | https://huggingface.co/kawagoshi-llm-team/llma3_manydata_not_our_data_rope
Model Size | 12B
Required VRAM | 24.2 GB
Updated | 2025-02-05
Maintainer | kawagoshi-llm-team
Model Type | llama
Model Files |
Model Architecture | LlamaForCausalLM
Context Length | 2048
Model Max Length | 2048
Transformers Version | 4.36.2
Tokenizer Class | PreTrainedTokenizerFast
Padding Token | <pad>
Vocabulary Size | 100096
Torch Data Type | bfloat16
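Given the configuration above (LlamaForCausalLM, bfloat16 weights, 2048-token context, fast tokenizer), the model should load with the standard transformers API. A minimal sketch follows; it assumes the repository is publicly accessible, that roughly 24 GB of GPU memory is available, and that no chat template is required (the prompt is purely illustrative):

```python
# Minimal loading sketch based on the spec table above.
# Assumes: public repo access, ~24 GB VRAM, transformers >= 4.36.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kawagoshi-llm-team/llma3_manydata_not_our_data_rope"

tokenizer = AutoTokenizer.from_pretrained(repo)  # PreTrainedTokenizerFast per the card
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # needs the accelerate package installed
)

# Illustrative prompt; the card does not document a prompt format.
inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs.input_ids,
    max_new_tokens=128,          # keep prompt + output under the 2048-token context
    pad_token_id=tokenizer.pad_token_id,  # card defines <pad> as the padding token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```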
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
OpenCrystal 12B L3.1 128K | 128K / 23 GB | 43 | 3 |
OpenCrystal 12B L3 | 8K / 23 GB | 9 | 14 |
Llama3 12B | 8K / 23.1 GB | 12 | 1 |
IxChel L3 12B | 8K / 23 GB | 10 | 2 |
Ursidae 12B Mini | 8K / 23 GB | 20 | 3 |
Llama 3 Kor BCCard 12B | 8K / 23.3 GB | 0 | 0 |
YuLan Base 12B | 4K / 23.8 GB | 13 | 3 |
YuLan Chat 3 12B | 4K / 23.8 GB | 7 | 3 |
Llma3 Manydata Our Data Rope | 2K / 24.2 GB | 5 | 0 |
Llama3 Sft Many Chat | 2K / 24.2 GB | 6 | 0 |