| LLM Name | Medguanaco Lora 65B GPTQ |
|---|---|
| Repository 🤗 | https://huggingface.co/nmitchko/medguanaco-lora-65b-GPTQ |
| Model Size | 65b |
| Required VRAM | 0.3 GB |
| Updated | 2024-09-18 |
| Maintainer | nmitchko |
| Model Files | |
| Supported Languages | en |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | AutoModel |
| License | cc |
| Is Biased | none |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | q_proj, v_proj |
| LoRA Alpha | 64 |
| LoRA Dropout | 0.05 |
| R Param | 32 |
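The PEFT settings above (target modules, alpha, dropout, and rank) map directly onto a `peft` `LoraConfig`. Below is a minimal sketch, assuming the `peft` and `transformers` libraries; the base checkpoint name is an assumption, since the card does not state which LLaMA-65B base the adapter was trained against.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel

# LoRA hyperparameters as listed on this card.
lora_config = LoraConfig(
    r=32,                                 # R Param
    lora_alpha=64,                        # LoRA Alpha
    lora_dropout=0.05,                    # LoRA Dropout
    target_modules=["q_proj", "v_proj"],  # PEFT Target Modules
    task_type="CAUSAL_LM",
)

# Attaching the published adapter to a base model (the base checkpoint
# "huggyllama/llama-65b" is an assumption, not part of the card).
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-65b", device_map="auto")
model = PeftModel.from_pretrained(base, "nmitchko/medguanaco-lora-65b-GPTQ")
```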
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...en Instruct Human Mix 65B GGUF | 0K / 27 GB | 50 | 1 |
| LLaMA 65B GGUF | 0K / 27 GB | 270 | 5 |
| ...stage Llama1 65B Instruct GGUF | 0K / 27 GB | 214 | 1 |
| Guanaco 65B GGUF | 0K / 27 GB | 134 | 5 |
| Airoboros 65B GPT4 2.0 GGML | 0K / 27.5 GB | 0 | 2 |
| Airoboros 65B GPT4 M2.0 GGML | 0K / 27.5 GB | 0 | 2 |
| ...stage Llama1 65B Instruct GGML | 0K / 27.5 GB | 6 | 5 |
| LLaMa 65B GGML | 0K / 27.3 GB | 0 | 25 |