LLM Name | Mambamerd
---|---
Repository 🤗 | https://huggingface.co/Ammartatox/mambamerd
Model Size | 1.4B
Required VRAM | 5.8 GB
Updated | 2024-07-04
Maintainer | Ammartatox
Model Type | mamba
Supported Languages | en
Model Architecture | MambaForCausalLM
License | apache-2.0
Transformers Version | 4.41.2
Tokenizer Class | GPTNeoXTokenizer
Padding Token | <|endoftext|>
Vocabulary Size | 50280
Torch Data Type | float32
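Given the fields above, the checkpoint should load through the standard transformers Mamba classes. A minimal sketch, assuming only what the table states (MambaForCausalLM, transformers >= 4.39, float32 weights); the prompt text is illustrative:

```python
# Minimal sketch: load Ammartatox/mambamerd via the transformers Mamba
# classes named in the card. Requires transformers >= 4.39 (the release
# that added MambaForCausalLM); the optional mamba-ssm/causal-conv1d
# kernels speed up inference, with a slower pure-PyTorch fallback otherwise.
import torch
from transformers import AutoTokenizer, MambaForCausalLM

model_id = "Ammartatox/mambamerd"  # repository from the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MambaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

prompt = "Mamba is a state-space model that"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```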
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Mamba 1.4B Hf | 0K / 5.5 GB | 4448 | 10 |
Mambamerd | 0K / 5.8 GB | 6 | 0 |
Mamba 1.4B Instruct Hf | 0K / 5.5 GB | 96 | 0 |
Ofm Mamba 1.4B Lambda Hf | 0K / 5.5 GB | 54 | 1 |
Mamba 1.4B | 0K / 2.8 GB | 126 | 0 |
Mamba 1.4B Ru | 0K / 5.3 GB | 14 | 5 |
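The RAM column above tracks checkpoint size, which parameter count and data type roughly determine. A quick back-of-envelope check, using the 1.4B parameter count from the card (the 2.8 GB entry is presumably a half-precision export):

```python
# Rough weight-only size: parameters x bytes-per-parameter. Estimate only;
# runtime VRAM also includes activations and framework overhead.
params = 1.4e9
print(f"float32: {params * 4 / 1e9:.1f} GB")  # 5.6 GB, near the 5.5-5.8 GB rows
print(f"float16: {params * 2 / 1e9:.1f} GB")  # 2.8 GB, matching the 2.8 GB row
```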