| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| ... Summarize 64K QLoRANET Merged | — | 128K / 4.1 GB | 17 | 0 |
| ...1 Summarize 64K LoRANET Merged | — | 128K / 14.4 GB | 459 | 0 |
| ...0.2 Neuron 1x2048 2 Cores 2.18 | — | 32K / GB | 1 | 1 |
| ...uct V0.2 Neuron 1x2048 2 Cores | — | 32K / GB | 1 | 1 |
| Mistral 7B Instruct V0.1 | — | 32K / GB | 97 | 0 |
| ... V0.2 Seqlen 2048 Bs 1 Cores 2 | — | 32K / GB | 68 | 0 |
| ...uct V0.1 Neuron 1x2048 2 Cores | — | 32K / GB | 8 | 0 |
| Mistral Neuron | — | 32K / GB | 7 | 0 |
| ...stral 7B Instruct V0.2 Ov Int4 | — | 32K / 0 GB | 5 | 0 |
| Sq Mistral 7B Instruct W3 S0 | — | 32K / 3.2 GB | 3 | 1 |
| LLM Name | Neural Una Cybertron 7B |
|---|---|
| Repository | Open on 🤗 |
| Model Size | 7b |
| Required VRAM | 14.4 GB |
| Updated | 2024-07-01 |
| Maintainer | Weyaxi |
| Model Type | mistral |
| Instruction-Based | Yes |
| Model Files | |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.35.2 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
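The spec table above maps directly onto a standard `transformers` loading call. Below is a minimal sketch, assuming a Hugging Face repository id of `Weyaxi/neural-una-cybertron-7b` (guessed from the maintainer and model name in the table; verify the exact path on the hub before use). The dtype and context length come from the table; imports are deferred inside the function so the sketch stays importable without the heavy libraries installed.

```python
REPO_ID = "Weyaxi/neural-una-cybertron-7b"  # assumed repo id; confirm on the hub
MAX_LENGTH = 32768  # Context Length / Model Max Length from the spec table


def load_model(repo_id: str = REPO_ID):
    """Load the tokenizer and model with the settings listed in the table.

    Deferred imports keep this module importable even when torch and
    transformers are not installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Resolves to LlamaTokenizer per the Tokenizer Class field.
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.bfloat16,  # Torch Data Type from the table; ~14.4 GB VRAM
        device_map="auto",           # place layers on available GPUs/CPU
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
```

The ~14.4 GB VRAM requirement corresponds to the 7B parameters stored in bfloat16 (2 bytes each); quantized variants, like the 4-bit entries in the alternatives table, trade precision for a smaller footprint.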