| LLM Name | reflect_llm8B_om2-mstlrg-300k460k-t12_llm33-130k-t12_SFTt1-lr1e-6_DPOt1 |
|---|---|
| Repository 🤗 | https://huggingface.co/RyanYr/reflect_llm8B_om2-mstlrg-300k460k-t12_llm33-130k-t12_SFTt1-lr1e-6_DPOt1 |
| Model Name | reflect_llm8B_om2-mstlrg-300k460k-t12_llm33-130k-t12_SFTt1-lr1e-6_DPOt1 |
| Base Model(s) | |
| Model Size | 8B |
| Required VRAM | 16.1 GB |
| Updated | 2025-03-23 |
| Maintainer | RyanYr |
| Model Type | llama |
| Model Files | |
| Model Architecture | LlamaForCausalLM |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.45.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | [PAD] |
| Vocabulary Size | 128257 |
| Torch Data Type | bfloat16 |
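The fields above map directly onto a standard `transformers` loading call. The snippet below is a minimal, untested sketch assuming `torch`, `transformers` (the card lists 4.45.2), and `accelerate` (for `device_map="auto"`) are installed; the prompt string is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RyanYr/reflect_llm8B_om2-mstlrg-300k460k-t12_llm33-130k-t12_SFTt1-lr1e-6_DPOt1"

# PreTrainedTokenizerFast with a dedicated [PAD] token (vocab size 128257).
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Loads the LlamaForCausalLM weights in bfloat16, matching the listed torch
# dtype; expect roughly 16.1 GB of VRAM for the weights alone.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

prompt = "Solve step by step: what is 12 * 34?"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```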
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5339 | 682 |
| A4 | 1024K / 16.1 GB | 161 | 0 |
| A2 | 1024K / 16.1 GB | 153 | 0 |
| A18 | 1024K / 16.1 GB | 272 | 0 |
| A12 | 1024K / 16.1 GB | 256 | 0 |
| A10 | 1024K / 16.1 GB | 254 | 0 |
| A1 | 1024K / 16.1 GB | 289 | 0 |
| C31 | 1024K / 16.1 GB | 183 | 0 |
| B5 | 1024K / 16.1 GB | 147 | 0 |
| A15 | 1024K / 16.1 GB | 160 | 0 |
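The 16.1 GB figure that recurs in both tables is consistent with storing the weights in bfloat16 (2 bytes per parameter). A quick sanity check, assuming the ~8.03B parameter count typical of Llama-3-8B-class models (the exact count is not listed on this card):

```python
# Hypothetical check: weight memory for an 8B-class model in bfloat16.
params = 8.03e9          # assumed Llama-3-8B-class parameter count (not listed above)
bytes_per_param = 2      # bfloat16 = 16 bits = 2 bytes
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # -> 16.1 GB (decimal GB, weights only)
```

Activations, KV cache, and any optimizer state add to this, so actual peak usage at long contexts will be higher than the weights-only figure.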