| LLM Name | Meditron Llama2 7b 58k |
|---|---|
| Repository 🤗 | https://huggingface.co/Narednra/Meditron_llama2_7b_58k |
| Model Size | 7b |
| Required VRAM | 13.5 GB |
| Updated | 2024-08-15 |
| Maintainer | Narednra |
| Model Files | |
| Model Architecture | Adapter |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <PAD> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | up_proj, k_proj, q_proj, down_proj, o_proj, gate_proj, v_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| R Param | 32 |
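The card lists the adapter's LoRA hyperparameters (r=32, alpha=16, dropout=0.05, seven target projection modules) but no usage snippet. Below is a minimal sketch of loading the adapter with 🤗 PEFT. The base model ID `epfl-llm/meditron-7b` is an assumption inferred from the adapter's name; the card does not state which Llama-2-7B-derived checkpoint the adapter was trained against, so substitute the correct one if it differs.

```python
# Minimal sketch: attach the LoRA adapter to its base model with PEFT.
# ASSUMPTION: the base model ID is inferred from the adapter's name; the
# card itself does not name the base checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "epfl-llm/meditron-7b"            # assumed Llama-2-7B-derived base
ADAPTER_ID = "Narednra/Narednra_placeholder"   # see note below

ADAPTER_ID = "Narednra/Meditron_llama2_7b_58k"

# The card lists LlamaTokenizer; if the adapter repo does not ship
# tokenizer files, load the tokenizer from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,   # fp16 weights are consistent with the ~13.5 GB VRAM figure
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

prompt = "List common first-line treatments for hypertension."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```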
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen Megumin | 0K / 0.1 GB | 13 | 0 |
| ...s 25 Mistral 7B Irca DPO Pairs | 0K / 0.1 GB | 5 | 0 |
| Qwen1.5 7B Chat Sa V0.1 | 0K / 0 GB | 9 | 0 |
| Zephyr 7B Ipo 0K 15K I1 | 0K / 0.7 GB | 9 | 0 |
| Deepthink Reasoning Adapter | 0K / 0.2 GB | 27 | 8 |
| Deepseek Llm 7B Chat Sa V0.1 | 0K / 0 GB | 5 | 0 |
| ... Days Of Sodom LoRA Mistral 7B | 0K / 0.2 GB | 5 | 0 |
| Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0 |
| CodeAstra 7B | 0K / 0 GB | 639 | 10 |
| ...eze Embed Tokens Q V Proj Lora | 0K / 0.1 GB | 4 | 1 |