Finetuned Mistral On Ads by not-lain


Tags: Adapter · Base model: adapter:mistralai/m... · Base model: mistralai/mistral-7... · Finetuned · Generated from trainer · Instruct · LoRA · PEFT · Region: us · Safetensors · SFT · TensorBoard · TRL

Finetuned Mistral On Ads Benchmarks

Benchmark scores (shown as percentages) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Finetuned Mistral On Ads Parameters and Internals

LLM Name: Finetuned Mistral On Ads
Repository: 🤗 https://huggingface.co/not-lain/finetuned_mistral_on_ads
Base Model(s): mistralai/Mistral-7B-Instruct-v0.3
Model Size: 7B
Required VRAM: 0 GB
Updated: 2024-08-15
Maintainer: not-lain
Instruction-Based: Yes
Model Files: 0.0 GB
Model Architecture: Adapter
License: apache-2.0
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: o_proj|gate_proj|v_proj|k_proj|q_proj|down_proj|up_proj
LoRA Alpha: 8
LoRA Dropout: 0
R Param: 8

Finetuned Mistral On Ads (not-lain/finetuned_mistral_on_ads)
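The LoRA hyperparameters listed above can be gathered into a config-style dictionary mirroring the fields a PEFT `adapter_config.json` typically carries. This is a minimal sketch based only on the table values (the target modules are stored as one pipe-separated string); the PEFT loading lines at the end are illustrative comments, not part of the executed code.

```python
# Rebuild the adapter's LoRA configuration from the fields in the table above.
target_modules_raw = "o_proj|gate_proj|v_proj|k_proj|q_proj|down_proj|up_proj"

lora_config = {
    "peft_type": "LORA",
    "base_model_name_or_path": "mistralai/Mistral-7B-Instruct-v0.3",
    "r": 8,                # rank of the LoRA update matrices
    "lora_alpha": 8,       # scaling factor; alpha / r = 1.0 for this adapter
    "lora_dropout": 0.0,
    "bias": "none",
    "target_modules": target_modules_raw.split("|"),  # 7 projection layers
}

print(sorted(lora_config["target_modules"]))

# To actually load the adapter with the PEFT library (requires a download,
# so it is not executed here):
#   from transformers import AutoModelForCausalLM
#   from peft import PeftModel
#   base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
#   model = PeftModel.from_pretrained(base, "not-lain/finetuned_mistral_on_ads")
```

Since alpha equals r here, the LoRA updates are applied at their unscaled magnitude (scaling factor 1.0).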

Quantized Models of the Finetuned Mistral On Ads

Model | Likes | Downloads | VRAM
...stral 7B Instruct V0.3 GPTQ 8B | 0 | 36 | 7 GB
Mistral 7B Instruct V0.3 GPTQ | 0 | 6 | 4 GB
...stral 7B Instruct V0.3 GPTQ 4B | 0 | 5 | 4 GB

Best Alternatives to Finetuned Mistral On Ads

Best Alternatives | Context / RAM | Downloads | Likes
...Sql Flash Attention 2 Dataeval | 0K / 1.9 GB | 318 | 2
Lora Adapted Mistral 7B | 0K / 0 GB | 50 | 0
...82 6142 45d8 9455 Bc68ca4866eb | 0K / 1.2 GB | 6 | 0
...al 7B Instruct V0.3 1719301256 | 0K / 0.9 GB | 9 | 0
Text To Rule Mistral 2 | 0K / 0.3 GB | 6 | 0
...al 7B Instruct V0.3 1719246505 | 0K / 0 GB | 9 | 0
...al 7B Instruct V0.3 1719297750 | 0K / 0.4 GB | 6 | 0
Text To Rule Mistral | 0K / 0.4 GB | 6 | 0
...al 7B Instruct V0.3 1719341325 | 0K / 1.7 GB | 6 | 0
...al 7B Instruct V0.3 1719350528 | 0K / 0.9 GB | 5 | 0
Note: a green score (e.g. "73.2") means that the alternative model outperforms not-lain/finetuned_mistral_on_ads.

Rank the Finetuned Mistral On Ads Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024072803