| LLM Name | WizardLM Uncensored Falcon 7B GGML |
|---|---|
| Repository 🤗 | https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-7B-GGML |
| Model Size | 7B |
| Required VRAM | 4.1 GB |
| Updated | 2024-09-16 |
| Maintainer | TheBloke |
| Model Type | falcon |
| Model Files | |
| GGML Quantization | Yes |
| Quantization Type | ggml |
| Model Architecture | AutoModel |
| License | other |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llama2 7B Chat Uncensored GGML | 0K / 2.9 GB | 38 | 115 |
| ...sMegaCoder Llama2 7B Mini GGML | 0K / 3 GB | 1 | 4 |
| Llama 2 7B Chat GGML | 0K / 2.9 GB | 2849 | 843 |
| Llama 2 GGML Medical Chatbot | 0K / GB | 35 | 16 |
| Llama 2 7B GGML | 0K / 2.9 GB | 150 | 216 |
| CodeLlama 7B GGML | 0K / 3 GB | 6 | 27 |
| CodeLlama 7B Python GGML | 0K / 2.9 GB | 18 | 23 |
| CodeLlama 7B Instruct GGML | 0K / 3 GB | 15 | 20 |
| Yarn Llama 2 7B 128K GGML | 0K / 2.9 GB | 2 | 6 |
| Yarn Llama 2 7B 64K GGML | 0K / 2.9 GB | 0 | 3 |