LLM Name | 157 |
Repository 🤗 | https://huggingface.co/MrRobotoAI/157 |
Base Model(s) | |
Merged Model | Yes |
Model Size | 7B |
Required VRAM | 16.1 GB |
Updated | 2025-03-03 |
Maintainer | MrRobotoAI |
Model Type | llama |
Model Files | |
Model Architecture | LlamaForCausalLM |
Context Length | 1048576 |
Model Max Length | 1048576 |
Transformers Version | 4.48.2 |
Tokenizer Class | PreTrainedTokenizerFast |
Vocabulary Size | 128256 |
LoRA Model | Yes |
Torch Data Type | float16 |
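
The fields above map onto the usual Hugging Face transformers loading flow. Below is a minimal sketch, assuming the standard AutoTokenizer/AutoModelForCausalLM API and that the listed repository is public; the prompt and generation settings are illustrative only, and the ~16.1 GB of float16 weights require a GPU (or offloading via device_map) with enough memory.

```python
# Minimal loading sketch for MrRobotoAI/157 (repo ID taken from the Repository field above).
# Generation settings are illustrative and not part of the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MrRobotoAI/157"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Write a short story about a distant colony ship."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that although the card lists a 1,048,576-token context length, actually using a context that long needs far more memory than the weights alone; in practice you would cap the input length to what your hardware supports.
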
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
124 | 1024K / 16.1 GB | 93 | 0 |
162 | 1024K / 16.1 GB | 60 | 0 |
101 | 1024K / 16.1 GB | 99 | 0 |
2 Very Sci Fi | 1024K / 16.1 GB | 317 | 0 |
118 | 1024K / 16.1 GB | 15 | 0 |
...1M 1000000ctx AEZAKMI 3 1 1702 | 1024K / 13.5 GB | 14 | 1 |
... Qwen2.5llamaify 7B V23.1 200K | 195K / 15.2 GB | 3049 | 3 |
LlamaStock 8B | 128K / 16.1 GB | 12 | 1 |
SuperNeuralDreadDevil 8B | 128K / 16.1 GB | 58 | 1 |
Yarn Llama 2 7B 128K | 128K / 13.5 GB | 5981 | 39 |