Additional Notes | This model is a SLERP merge of pre-trained language models.
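The model name indicates a SLERP (spherical linear interpolation) merge: the parent checkpoints are blended along the great-circle arc between their weight vectors rather than a straight line, which preserves the norm of the interpolated weights. The sketch below illustrates the operation on a single pair of tensors; it is an illustrative assumption, not the exact recipe used to build this model.

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of identical shape.

    t = 0 returns w0, t = 1 returns w1; intermediate values follow the
    great-circle arc between the two flattened weight vectors.
    """
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors.
    cos_theta = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    sin_theta = torch.sin(theta)
    if sin_theta.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1.0 - t) * v0 + t * v1
    else:
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * v0 + (torch.sin(t * theta) / sin_theta) * v1
    return merged.reshape(w0.shape).to(w0.dtype)
```

In practice, merge tooling such as mergekit applies this per tensor across both checkpoints, often with a layer-dependent interpolation factor t.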
Training Details

Field | Value
---|---
LLM Name | Llama 3 8B Instruct MergeSLERP Gradient1048k OpenBioLLM
Repository 🤗 | https://huggingface.co/lighteternal/Llama-3-8B-Instruct-MergeSLERP-Gradient1048k-OpenBioLLM
Base Model(s) | 
Merged Model | Yes
Model Size | 8B
Required VRAM | 16.1 GB
Updated | 2024-11-09
Maintainer | lighteternal
Model Type | llama
Instruction-Based | Yes
Model Files | 
Model Architecture | LlamaForCausalLM
License | llama3
Context Length | 1048576 tokens (1024K)
Model Max Length | 1048576 tokens (1024K)
Transformers Version | 4.41.0
Tokenizer Class | PreTrainedTokenizerFast
Vocabulary Size | 128256
Torch Data Type | bfloat16
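Given the LlamaForCausalLM architecture, bfloat16 weights, and Transformers 4.41.0 listed above, the checkpoint loads through the standard transformers API. A minimal sketch follows; the prompt and generation settings are illustrative assumptions, not values prescribed by the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lighteternal/Llama-3-8B-Instruct-MergeSLERP-Gradient1048k-OpenBioLLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type above
    device_map="auto",           # full bf16 weights need ~16.1 GB of VRAM
)

# Instruction-tuned model: format the prompt with the chat template.
messages = [{"role": "user", "content": "Summarize the role of ACE inhibitors."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that the 1048576-token context length is a maximum, not a default working size: the KV cache for contexts anywhere near that scale requires far more memory than the 16.1 GB needed for the weights alone.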
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 12568 | 672 |
L3.1 Gradient | 1024K / 16.1 GB | 10 | 0 |
...lama3 8B Special Dark V3.1.2aa | 1024K / 16.1 GB | 13 | 0 |
Llama3 8B Special Dark V3.1.2B | 1024K / 16.1 GB | 12 | 0 |
...lama3 8B Special Dark V3.1.1yy | 1024K / 16.1 GB | 14 | 0 |
Loki | 1024K / 16.1 GB | 9 | 0 |
Unholy Thoth 8B V2 | 1024K / 16.1 GB | 12 | 0 |
...struct Gradient 1048K MAC Lora | 1024K / 5.9 GB | 13 | 2 |
... Instruct Gradient 1048K Agent | 1024K / 16.1 GB | 131 | 1 |
... V0.1.0 Llama 3 8B Instruct 1M | 1024K / 16.1 GB | 34 | 1 |