| LLM Name | Replit Openorca |
|---|---|
| Repository 🤗 | https://huggingface.co/matorus/replit-openorca |
| Required VRAM | 5.2 GB |
| Updated | 2024-10-18 |
| Maintainer | matorus |
| Model Type | mpt |
| Model Files | |
| Model Architecture | MPTForCausalLM |
| Model Max Length | 2048 |
| Transformers Version | 4.30.2 |
| Tokenizer Class | ReplitLMTokenizer |
| Padding Token | <\|pad\|> |
| Vocabulary Size | 32768 |
| Torch Data Type | bfloat16 |
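A minimal loading sketch based on the card's details (model id, bfloat16 dtype, MPTForCausalLM architecture). It assumes a Transformers version compatible with the listed 4.30.2 and that the repository ships custom modeling/tokenizer code, so `trust_remote_code=True` is required; the prompt string is purely illustrative.

```python
# Sketch: load matorus/replit-openorca with Hugging Face Transformers.
# Assumptions: transformers >= 4.30.2, sentencepiece installed for the
# custom ReplitLMTokenizer, and enough VRAM (~5.2 GB per the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "matorus/replit-openorca"

# MPT-style repos ship custom code on the Hub, hence trust_remote_code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    trust_remote_code=True,
)

prompt = "def fibonacci(n):"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the card lists a model max length of 2048 tokens, so prompt plus generated tokens should stay within that window.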
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Tiny Mpt Random Remote Code | 0K / 0 GB | 19134 | 0 |
| WangchanLion7B | 0K / 29.8 GB | 90 | 5 |
| Replit Code Instruct Glaive | 0K / 10.4 GB | 8 | 88 |
| Results Sharded Bf16 5GB | 0K / 13.4 GB | 5 | 0 |
| Replit Coder | 0K / 5.2 GB | 9 | 0 |
| Replit Leetcode | 0K / 5.2 GB | 5 | 0 |
| Gpt4all Mpt | 0K / 26.6 GB | 9 | 10 |
| Mpt Mini Shakespeare | 0K / 0 GB | 14 | 1 |
| PhoGPT 7B5 GGUF | 0K / 17 GB | 22 | 2 |