Llama 3.1 8B MultiReflection Instruct by leafspark


Llama 3.1 8B MultiReflection Instruct Benchmarks

Benchmark scores are shown as percentages ("nn.n%") indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama 3.1 8B MultiReflection Instruct (leafspark/Llama-3.1-8B-MultiReflection-Instruct)

Llama 3.1 8B MultiReflection Instruct Parameters and Internals

Model Type 
text-generation
Use Cases 
Considerations:
It's recommended to use at least 16k context due to long response lengths.
Additional Notes 
The model produces long, verbose reasoning responses, giving detailed step-by-step explanations for topics such as mathematical proofs.
Supported Languages 
en (English), de (German), fr (French), it (Italian), pt (Portuguese), hi (Hindi), es (Spanish), th (Thai)
Training Details 
Data Sources:
leafspark/DetailedReflection-Claude-v3_5-Sonnet
Data Volume:
81 examples, each approximately 3000 tokens
Methodology:
Unsloth fine-tuning with LoRA rank 128, packing enabled, batch size 2, gradient accumulation steps 4, and 3 epochs (30 steps); see the sketch after this section.
Context Length:
4096
Training Time:
52.32 minutes
Hardware Used:
Google Colab's free T4
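
The recipe above maps fairly directly onto Unsloth's fine-tuning API. The following is a minimal sketch of that setup, assuming the public unsloth and trl packages and QLoRA-style 4-bit loading (consistent with training on a free T4); anything not stated in the card, such as the LoRA alpha and the dataset's text column name, is a guess.

```python
# Hedged sketch of the stated recipe: Unsloth, LoRA rank 128, packing enabled,
# batch size 2, gradient accumulation 4, 3 epochs / 30 steps, 4096 context.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=4096,          # training context length from the card
    load_in_4bit=True,            # assumption: QLoRA-style loading fits a free T4
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,                        # LoRA rank from the card
    lora_alpha=128,               # assumption: alpha is not stated
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("leafspark/DetailedReflection-Claude-v3_5-Sonnet", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",    # assumption: the actual column name may differ
    max_seq_length=4096,
    packing=True,                 # "Packing: enabled"
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        max_steps=30,             # ~3 epochs over 81 packed examples
        output_dir="outputs",
    ),
)
trainer.train()
```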
Input Output 
Input Format:
Prompts should use nested XML tags that separate the model's reasoning process from its final response.
Accepted Modalities:
text
Output Format:
Structured XML format
Performance Tips:
Use recommended sampling parameters (Temperature: 0.15, Min-P: 0.2, Top-K: 50, Top-P: 1, Frequency Penalty: 0.5, Presence Penalty: 0.1) for coherent responses.
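
Put together, the input/output notes above amount to a standard chat-template inference call. Below is a minimal sketch using Hugging Face transformers with the recommended sampling values; frequency and presence penalties are OpenAI-style parameters with no direct generate() equivalent, so the repetition_penalty value here is an assumed stand-in, and min_p requires a recent transformers release.

```python
# Hedged sketch: chat inference with the recommended sampling parameters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leafspark/Llama-3.1-8B-MultiReflection-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Responses are long; leave generous headroom per the 16k-context advice.
output = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.15,
    min_p=0.2,               # needs a transformers release with min-p sampling
    top_k=50,
    top_p=1.0,
    repetition_penalty=1.1,  # assumed stand-in for frequency/presence penalties
)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```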
LLM Name: Llama 3.1 8B MultiReflection Instruct
Repository: 🤗 https://huggingface.co/leafspark/Llama-3.1-8B-MultiReflection-Instruct
Base Model(s): meta-llama/Meta-Llama-3.1-8B-Instruct
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-01-23
Maintainer: leafspark
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1 of 4), 5.0 GB (2 of 4), 4.9 GB (3 of 4), 1.2 GB (4 of 4)
Supported Languages: en, de, fr, it, pt, hi, es, th
Quantization Type: 4bit
Model Architecture: LlamaForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: float16
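
The checkpoint above ships as roughly 16.1 GB of sharded float16 weights, and the card carries a 4bit quantization tag. One way to run it on a smaller GPU is on-the-fly 4-bit loading with bitsandbytes; a minimal sketch follows, where the NF4 settings are a common default rather than anything the repo specifies.

```python
# Hedged sketch: on-the-fly 4-bit (NF4) loading with bitsandbytes, shrinking
# the 16.1 GB float16 checkpoint to roughly 5-6 GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # common default, not repo-specified
    bnb_4bit_compute_dtype=torch.float16,  # matches the card's torch dtype
)

model_id = "leafspark/Llama-3.1-8B-MultiReflection-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available devices automatically
)
```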

Best Alternatives to Llama 3.1 8B MultiReflection Instruct

Best Alternatives                    Context / RAM     Downloads   Likes
...B Instruct Gradient 1048K 4bit    1024K / 4.5 GB    31          2
...B Instruct Gradient 1048K 8bit    1024K / 8.6 GB    25          1
...truct Gradient 1048K Bpw6 EXL2    1024K / 6.7 GB    9           2
...truct Gradient 1048K Bpw5 EXL2    1024K / 5.8 GB    9           0
Llama 3 8B Instruct 1048K 4bit       1024K / 4.5 GB    82          5
Llama 3 8B Instruct 1048K 8bit       1024K / 8.6 GB    221         7
... Gradient 1048K 8.0bpw H8 EXL2    1024K / 8.6 GB    8           3
...ct Gradient 1048K Bpw2.25 EXL2    1024K / 3.4 GB    8           1
...B Instruct 262k V2 EXL2 8.0bpw    256K / 8.4 GB     30          0
Llama 3 8B Instruct 262K 2bit        256K / 2.5 GB     17          1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227