| LLM Name | Openchat 3.5 0106 |
|---|---|
| Repository 🤗 | https://huggingface.co/openchat/openchat-3.5-0106 |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 14.4 GB |
| Updated | 2024-12-14 |
| Maintainer | openchat |
| Model Type | mistral |
| Model Files | |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.36.1 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32002 |
| Torch Data Type | bfloat16 |
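
The specs above translate directly into a standard transformers loading call. Below is a minimal sketch, assuming transformers >= 4.36.1 (per the table) and enough GPU memory for the ~14.4 GB of bfloat16 weights; the example prompt and generation settings are illustrative, not taken from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openchat/openchat-3.5-0106"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type row above
    device_map="auto",           # requires the accelerate package
)

# Format a single-turn conversation with the repository's bundled chat
# template (assumes one ships with the repo, as openchat releases typically do).
messages = [{"role": "user", "content": "Summarize sliding-window attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading in bfloat16 matches the listed Torch Data Type; on hardware without bfloat16 support, float16 is a common substitute.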
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Openchat 3.5 0106 GGUF | 71 | 2795 | 3 GB |
| Openchat 3.5 0106 GGUF | 0 | 81 | 3 GB |
| MasherAI 7B V1 GGUF | 0 | 6 | 4 GB |
| MasherAI 7B V0.9 GGUF | 0 | 5 | 4 GB |
| Openchat 3.5 0106 GGUF | 0 | 111 | 2 GB |
| Newton 7B 8.0bpw H8 EXL2 | 1 | 13 | 7 GB |
| OpenChat 3.5 0106 GGUF | 2 | 168 | 3 GB |
| Openchat 3.5 0106 GPTQ | 7 | 76 | 4 GB |
| Openchat 3.5 0106 GPTQ | 1 | 23 | 4 GB |
| Openchat 3.5 0106 AWQ | 5 | 35 | 4 GB |
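
The GGUF rows above are llama.cpp-format quantizations, which run outside transformers with far less VRAM than the full bfloat16 weights. A minimal sketch using llama-cpp-python follows; the file name is a hypothetical placeholder, and the OpenChat-style prompt string is an assumption to verify against the model card.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./openchat-3.5-0106.Q4_K_M.gguf",  # hypothetical file name; use the .gguf you downloaded
    n_ctx=8192,       # context length from the spec table
    n_gpu_layers=-1,  # offload all layers to GPU; use 0 for CPU-only
)

# OpenChat-style prompt format (verify the exact template on the model card).
prompt = "GPT4 Correct User: What does Q4_K_M mean?<|end_of_turn|>GPT4 Correct Assistant:"
result = llm(prompt, max_tokens=128, stop=["<|end_of_turn|>"])
print(result["choices"][0]["text"])
```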
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 2745 | 9 |
| MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 18391 | 47 |
| SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 36 | 1 |
| SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 16 | 1 |
| ...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0 |
| MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3348 | 15 |
| Hebrew Mistral 7B 200K | 256K / 30 GB | 3386 | 15 |
| Astral 256K 7B V2 | 250K / 14.4 GB | 18 | 0 |
| Astral 256K 7B | 250K / 14.4 GB | 11 | 0 |
| Boptruth Agatha 7B | 128K / 14.4 GB | 609 | 0 |