LLM Name | Fimbulvetr 11B Attention V0.1 Test
Repository 🤗 | https://huggingface.co/TheHierophant/Fimbulvetr-11B-Attention-V0.1-test
Base Model(s) | |
Merged Model | Yes |
Model Size | 11b |
Required VRAM | 21.4 GB |
Updated | 2024-12-06 |
Maintainer | TheHierophant |
Model Type | llama |
Model Files | |
Model Architecture | LlamaForCausalLM |
Context Length | 4096 |
Model Max Length | 4096 |
Transformers Version | 4.46.2 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 32000 |
Torch Data Type | bfloat16 |
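
A minimal usage sketch, assuming the metadata listed above (LlamaForCausalLM architecture, LlamaTokenizer, bfloat16 weights, 4096-token context) and a standard Hugging Face Transformers setup; loading the full-precision weights needs roughly 21.4 GB of VRAM:

```python
# Sketch only: load the model and tokenizer with Transformers,
# following the fields from the model card above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheHierophant/Fimbulvetr-11B-Attention-V0.1-test"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed torch data type
    device_map="auto",           # requires `accelerate`; places weights on available GPUs
)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```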
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---|
MIstral 11B Omni OP U1k Ver0.1 | 32K / 21.4 GB | 1973 | 0 |
...ral 11B Omni OP 1K 2048 Ver0.1 | 32K / 21.4 GB | 1971 | 0 |
Llama 3 Synatra 11B V1 20K | 20K / 23 GB | 19 | 9 |
Fimbulvetr 11B V2.1 16K | 16K / 21.4 GB | 33 | 17 |
Moistral 11B V2 | 8K / 21.4 GB | 59 | 21 |
Moistral 11B V3 | 8K / 21.4 GB | 547 | 91 |
Narumashi 11B V0.9 | 8K / 21.4 GB | 54 | 1 |
Moistral 11B V5d E4 | 8K / 21.4 GB | 15 | 1 |
Moistral 11B V5a | 8K / 21.4 GB | 20 | 1 |
Moistral 11B V5b | 8K / 21.4 GB | 12 | 1 |