LLM Name | Mark1 Revision 10.7B |
Repository 🤗 | https://huggingface.co/DopeorNope/Mark1-revision-10.7B
Model Size | 10.7B
Required VRAM | 42.9 GB |
Updated | 2024-12-22 |
Maintainer | DopeorNope |
Model Type | mistral |
Model Files | |
Supported Languages | ko |
Model Architecture | MistralForCausalLM |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.36.0.dev0 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 32000 |
Torch Data Type | float32 |
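
Given the metadata above, the model can be loaded with the Hugging Face transformers library. The sketch below is illustrative rather than taken from the model card: it assumes the standard AutoModelForCausalLM / AutoTokenizer entry points (which resolve to the MistralForCausalLM and LlamaTokenizer classes listed in the table) and the repository id shown above; `device_map="auto"` additionally requires the accelerate package.

```python
# Minimal loading sketch based on the card's metadata (transformers >= 4.36).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DopeorNope/Mark1-revision-10.7B"  # from the Repository row above

tokenizer = AutoTokenizer.from_pretrained(repo_id)   # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(        # resolves to MistralForCausalLM
    repo_id,
    torch_dtype=torch.float32,  # matches the card's Torch Data Type; drives the ~42.9 GB VRAM figure
    device_map="auto",          # requires the accelerate package
)

prompt = "안녕하세요"  # the card lists Korean (ko) as the supported language
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the 42.9 GB VRAM requirement reflects the float32 checkpoint; loading with `torch_dtype=torch.float16` roughly halves that, consistent with the ~21.4 GB footprints of the 10.7B alternatives listed below.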
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
PiVoT 10.7B Mistral V0.2 | 32K / 21.4 GB | 5154 | 5 |
Barely Regal 10.7B | 32K / 21.5 GB | 19 | 0 |
Quintellect 10.7B | 32K / 21.4 GB | 90 | 3 |
Chikuma 10.7B | 32K / 21.4 GB | 301 | 5 |
Multi Verse Model 10.7B | 32K / 21.5 GB | 17 | 1 |
PrimaSumika 10.7B 128K | 32K / 21.5 GB | 26 | 1 |
Longcat 10.7B | 32K / 21.5 GB | 23 | 3 |
PiVoT 10.7B Mistral V0.2 RP | 32K / 21.4 GB | 720 | 7 |
... Loyal Mistral Maid 32K V0.2 A | 32K / 21.5 GB | 29 | 1 |
Chikuma 10.7B V2 | 32K / 21.4 GB | 222 | 1 |