LLM Name | 4 |
---|---|
Repository 🤗 | https://huggingface.co/MrRobotoAI/4 |
Base Model(s) | |
Merged Model | Yes |
Model Size | 8B |
Required VRAM | 16.1 GB |
Updated | 2025-01-09 |
Maintainer | MrRobotoAI |
Model Type | llama |
Model Files | |
Model Architecture | LlamaForCausalLM |
Context Length | 1048576 |
Model Max Length | 1048576 |
Transformers Version | 4.46.2 |
Tokenizer Class | PreTrainedTokenizerFast |
Vocabulary Size | 128256 |
LoRA Model | Yes |
Torch Data Type | float16 |
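Based on the entries above (LlamaForCausalLM architecture, float16 weights, merged model), here is a minimal loading sketch using the Hugging Face Transformers API. It assumes `transformers` (>= 4.46.2, per the table) and `accelerate` are installed, and a GPU with roughly the 16.1 GB of free VRAM listed above; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MrRobotoAI/4"  # repository listed above

# Tokenizer resolves to PreTrainedTokenizerFast, per the table.
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type row
    device_map="auto",          # requires accelerate; places weights on GPU
)

prompt = "Briefly explain what a merged model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that although the table advertises a 1048576-token context window, filling that window in float16 would need far more memory than the 16.1 GB quoted for the weights alone; short prompts like the one above are safe.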
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Llama 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5133 | 681 |
R2 | 1024K / 16.1 GB | 417 | 0 |
D7 | 1024K / 16.1 GB | 217 | 0 |
A6 | 1024K / 16.1 GB | 152 | 0 |
101 | 1024K / 16.1 GB | 139 | 0 |
A4 | 1024K / 16.1 GB | 138 | 0 |
A1 | 1024K / 16.1 GB | 5 | 0 |
A7 | 1024K / 16.1 GB | 79 | 0 |
MrRoboto ProLong 8B V4i | 1024K / 16.1 GB | 66 | 1 |
D14 | 1024K / 16.1 GB | 120 | 0 |