| Field | Value |
|---|---|
| LLM Name | ChatLM |
| Repository 🤗 | https://huggingface.co/ayoolaolafenwa/ChatLM |
| Required VRAM | 5.2 GB |
| Updated | 2025-02-22 |
| Maintainer | ayoolaolafenwa |
| Model Type | falcon |
| Supported Languages | en |
| Model Architecture | FalconForCausalLM |
| License | apache-2.0 |
| Model Max Length | 1024 |
| Transformers Version | 4.27.4 |
| Is Biased | 1 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 50304 |
| Torch Data Type | bfloat16 |
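
The details above (FalconForCausalLM architecture, GPT2Tokenizer, bfloat16 weights, 1024-token max length) are enough to load the checkpoint through the standard transformers API. The snippet below is a minimal sketch, assuming the usual AutoModel/AutoTokenizer loading path works for this repository; `trust_remote_code`, `device_map`, and the generation settings are assumptions for illustration, not taken from the model card.

```python
# Minimal loading sketch (assumptions: AutoModel loading works for this repo;
# trust_remote_code may be needed on older transformers versions for Falcon-style code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ayoolaolafenwa/ChatLM"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 weights listed above (~5.2 GB VRAM)
    device_map="auto",           # place weights on GPU if one is available (requires accelerate)
    trust_remote_code=True,      # assumption: may be required for custom Falcon code paths
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # model max length is 1024 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```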
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Really Tiny Falcon Testing | 2K / 0 GB | 47 | 1 |
| Tiny Random FalconForCausalLM | 0.5K / 0 GB | 15795 | 0 |
| Try2 Deploy Falcon | 0K / 13.8 GB | 5 | 0 |
| Lince Zero | 0K / 13.8 GB | 214 | 47 |