LLM Name | LLaMAntino 3 ANITA 8B Inst DPO ITA |
Repository 🤗 | https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA |
Model Creator | Marco Polignano - SWAP Research Group |
Base Model(s) | |
Model Size | 8b |
Required VRAM | 16.1 GB |
Updated | 2024-12-14 |
Maintainer | swap-uniba |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
Supported Languages | en it |
Model Architecture | LlamaForCausalLM |
License | llama3 |
Context Length | 8192 |
Model Max Length | 8192 |
Transformers Version | 4.40.0.dev0 |
Tokenizer Class | PreTrainedTokenizerFast |
Vocabulary Size | 128256 |
Torch Data Type | bfloat16 |
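A minimal loading sketch based on the details above (repository id, `LlamaForCausalLM` architecture, bfloat16 weights, instruction-tuned chat usage). It is not taken from the model card itself: the prompt text, generation settings, and the use of `device_map="auto"` are illustrative assumptions.

```python
# Sketch: load LLaMAntino-3-ANITA-8B-Inst-DPO-ITA with transformers.
# Assumes `transformers`, `torch`, and `accelerate` are installed and
# roughly 16.1 GB of VRAM is available for the bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # spreads layers across available devices
)

# The model is instruction-based, so format prompts with the chat template.
messages = [
    {"role": "user", "content": "Qual è la capitale d'Italia?"},  # example Italian prompt (hypothetical)
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Prompts should stay within the 8192-token context length noted above; the model supports English and Italian.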
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 14046 | 677 |
Test V0.7z 8B | 1024K / 16.1 GB | 76 | 0 |
Test V0.6c 8B | 1024K / 16.1 GB | 67 | 0 |
Test V0.6l 8B | 1024K / 16.1 GB | 51 | 0 |
Test V0.7i 8B | 1024K / 16.1 GB | 47 | 0 |
Test V0.6M 8B | 1024K / 16.1 GB | 23 | 0 |
Test V0.7h 8B | 1024K / 16.1 GB | 18 | 0 |
Test V0.6n 8B | 1024K / 16.1 GB | 14 | 0 |
L3.1 Gradient | 1024K / 16.1 GB | 8 | 0 |
...SLERP Gradient1048k OpenBioLLM | 1024K / 16.1 GB | 39 | 0 |