Llama 7B Finnish Instruct V0.2 En CMP TR Size 304 Epochs 10 2024 06 23 06 24 07 3558633 by vdavidr


Tags: Adapter, Finetuned, Generated from trainer, Instruct, LoRA, PEFT, Region:us, Safetensors, Tensorboard

Llama 7B Finnish Instruct V0.2 En CMP TR Size 304 Epochs 10 2024 06 23 06 24 07 3558633 Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Model: vdavidr/llama-7b-finnish-instruct-v0.2_En__CMP_TR_size_304_epochs_10_2024-06-23_06-24-07_3558633

Llama 7B Finnish Instruct V0.2 En CMP TR Size 304 Epochs 10 2024 06 23 06 24 07 3558633 Parameters and Internals

Model Type: fine-tuned model
LLM Name: Llama 7B Finnish Instruct V0.2 En CMP TR Size 304 Epochs 10 2024 06 23 06 24 07 3558633
Repository: https://huggingface.co/vdavidr/llama-7b-finnish-instruct-v0.2_En__CMP_TR_size_304_epochs_10_2024-06-23_06-24-07_3558633
Base Model(s): Finnish-NLP/llama-7b-finnish-instruct-v0.2
Model Size: 7B
Required VRAM: 1.1 GB
Updated: 2025-04-05
Maintainer: vdavidr
Instruction-Based: Yes
Model Files: 1.1 GB, 0.0 GB
Model Architecture: Adapter
License: apache-2.0
Model Max Length: 2048
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <|loppu|>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj|o_proj|q_proj|v_proj|up_proj|gate_proj|down_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 16
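The PEFT fields above can be assembled into the adapter's configuration as a sketch. Field names follow the `adapter_config.json` schema that the PEFT library writes; the `task_type` value and the commented loading snippet are assumptions based on standard PEFT usage for Llama-style causal LMs, not details stated on this page.

```python
# Sketch: reconstructing this adapter's LoRA configuration from the fields
# listed above. "task_type" is an assumption (causal LM is standard for
# Llama-style models) and is not stated on the model card.
adapter_config = {
    "peft_type": "LORA",
    "base_model_name_or_path": "Finnish-NLP/llama-7b-finnish-instruct-v0.2",
    "r": 16,                 # R Param
    "lora_alpha": 16,        # LoRA Alpha
    "lora_dropout": 0.1,     # LoRA Dropout
    "bias": "none",          # Is Biased: none
    "target_modules": [      # PEFT Target Modules, split on "|"
        "k_proj", "o_proj", "q_proj", "v_proj",
        "up_proj", "gate_proj", "down_proj",
    ],
    "task_type": "CAUSAL_LM",  # assumption, see note above
}

# The effective LoRA scaling factor is lora_alpha / r; here 16 / 16 = 1.0,
# i.e. the low-rank updates are applied at full strength.
scaling = adapter_config["lora_alpha"] / adapter_config["r"]

# Loading sketch (requires the transformers and peft packages plus the
# base-model weights; commented out because it downloads a 7B model):
#
#   from transformers import AutoModelForCausalLM
#   from peft import PeftModel
#
#   base = AutoModelForCausalLM.from_pretrained(
#       "Finnish-NLP/llama-7b-finnish-instruct-v0.2")
#   model = PeftModel.from_pretrained(
#       base,
#       "vdavidr/llama-7b-finnish-instruct-v0.2_En__CMP_TR_size_304_"
#       "epochs_10_2024-06-23_06-24-07_3558633")
```

Because the adapter targets all attention and MLP projections, it trains more parameters than a q/v-only LoRA, which is consistent with the ~1.1 GB adapter file size listed above.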

Best Alternatives to Llama 7B Finnish Instruct V0.2 En CMP TR Size 304 Epochs 10 2024 06 23 06 24 07 3558633

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen Megumin | 0K / 0.1 GB | 2 | 1 |
| Deepthink Reasoning Adapter | 0K / 0.2 GB | 19 | 2 |
| Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0 |
| Qwen2.5 7b NotesCorrector | 0K / 0.6 GB | 10 | 0 |
| ...82 6142 45d8 9455 Bc68ca4866eb | 0K / 1.2 GB | 6 | 0 |
| Text To Rule Mistral 2 | 0K / 0.3 GB | 6 | 0 |
| ...Sql Flash Attention 2 Dataeval | 0K / 1.9 GB | 5 | 3 |
| ...al 7B Instruct V0.3 1719301256 | 0K / 0.9 GB | 9 | 0 |
| ...al 7B Instruct V0.3 1719297750 | 0K / 0.4 GB | 6 | 0 |
| Text To Rule Mistral | 0K / 0.4 GB | 6 | 0 |
Note: a green score (e.g. "73.2") means the model outperforms vdavidr/llama-7b-finnish-instruct-v0.2_En__CMP_TR_size_304_epochs_10_2024-06-23_06-24-07_3558633.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227