Training Details

LLM Name | Mistral 7B V0.1 Emotion |
Repository 🤗 | https://huggingface.co/frankmorales2020/Mistral-7B-v0.1_Emotion |
Base Model(s) | |
Model Size | 7b |
Required VRAM | 1.3 GB |
Updated | 2025-03-01 |
Maintainer | frankmorales2020 |
Instruction-Based | Yes |
Model Files | |
Model Architecture | Adapter |
License | apache-2.0 |
Is Biased | none |
Tokenizer Class | LlamaTokenizer |
Padding Token | <unk> |
PEFT Type | LORA |
LoRA Model | Yes |
PEFT Target Modules | up_proj, v_proj, down_proj, k_proj, o_proj, q_proj, gate_proj |
LoRA Alpha | 128 |
LoRA Dropout | 0.05 |
R Param | 256 |
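
This entry describes a LoRA adapter rather than a standalone checkpoint, so it has to be attached to its base Mistral-7B model at load time. Below is a minimal sketch using the `peft` library, assuming recent `transformers`/`peft` versions; the dtype, device placement, task type, and sample prompt are illustrative assumptions rather than details taken from the card.

```python
# Minimal, untested sketch: attaching this LoRA adapter to its base model.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM, LoraConfig

adapter_id = "frankmorales2020/Mistral-7B-v0.1_Emotion"

# For reference only: a LoraConfig matching the hyperparameters listed above
# (task_type is an assumption based on the causal-LM architecture).
reference_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["up_proj", "v_proj", "down_proj", "k_proj",
                    "o_proj", "q_proj", "gate_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# AutoPeftModelForCausalLM reads the base-model reference from the adapter's
# config, loads the base weights, then attaches the ~1.3 GB LoRA adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep memory modest
    device_map="auto",           # requires `accelerate`
)

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
tokenizer.pad_token = tokenizer.unk_token  # padding token is <unk> per the card

prompt = "I can't stop smiling today."  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
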
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Qwen Megumin | 0K / 0.1 GB | 13 | 0
Deepthink Reasoning Adapter | 0K / 0.2 GB | 18 | 11
Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0
...Sql Flash Attention 2 Dataeval | 0K / 1.9 GB | 34 | 3
...82 6142 45d8 9455 Bc68ca4866eb | 0K / 1.2 GB | 5 | 0
Text To Rule Mistral 2 | 0K / 0.3 GB | 6 | 0
...al 7B Instruct V0.3 1719301256 | 0K / 0.9 GB | 9 | 0
...10 2024 06 23 06 24 07 3558633 | 0K / 1.1 GB | 16 | 0
...al 7B Instruct V0.3 1719297750 | 0K / 0.4 GB | 6 | 0
Text To Rule Mistral | 0K / 0.4 GB | 6 | 0