| LLM Name | Jamba 8xMoE Slerp |
|---|---|
| Repository | Open on 🤗 |
| Model Size | 29B |
| Required VRAM | 115.7 GB |
| Updated | 2024-07-27 |
| Maintainer | isemmanuelolowe |
| Model Type | jamba |
| Model Architecture | JambaForCausalLM |
| License | MIT |
| Transformers Version | 4.40.0.dev0 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <\|pad\|> |
| Vocabulary Size | 65536 |
| Torch Data Type | float32 |
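
For reference, here is a minimal loading sketch in Python. The repository id `isemmanuelolowe/Jamba-8xMoE_Slerp` is an assumption inferred from the maintainer and model name above, so verify it on the Hub before use. Note that the 115.7 GB VRAM figure is consistent with ~29B parameters stored at 4 bytes each in float32; casting to bfloat16 would roughly halve that.

```python
# Minimal loading sketch. The repo id below is an ASSUMPTION inferred
# from the maintainer and model name in the table; verify it on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "isemmanuelolowe/Jamba-8xMoE_Slerp"  # assumed, not confirmed

# Jamba support requires a recent Transformers release
# (the table lists 4.40.0.dev0).
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, vocab 65536

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float32,  # stored dtype per the table (~115.7 GB VRAM);
                                # torch.bfloat16 roughly halves memory use
)

prompt = "The Jamba architecture combines"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For multi-GPU setups, passing `device_map="auto"` to `from_pretrained` (with `accelerate` installed) lets Transformers shard the weights across available devices instead of loading everything onto one GPU.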