Llama 3 8B Instruct Gradient 1048K 8.0bpw H8 EXL2 by LoneStriker


Tags: arXiv:2309.00071 · arXiv:2402.08268 · 8-bit · autotrain-compatible · conversational · en · endpoints-compatible · exl2 · instruct · llama · llama-3 · meta · quantized · region:us · safetensors


Llama 3 8B Instruct Gradient 1048K 8.0bpw H8 EXL2 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Commercial, Research
Applications:
Chatbot, Text generation
Primary Use Cases:
Instruction-tuned models for assistant-like chat and tasks
Limitations:
Use in languages other than English is outside the scope laid out by the Acceptable Use Policy
Considerations:
Developers may fine-tune the models for languages beyond English, provided they adhere to the license and Acceptable Use Policy
Supported Languages 
en (English)
Training Details 
Data Sources:
SlimPajama
Data Volume:
15 trillion tokens of base-model pretraining; the long-context extension stage itself used a much smaller SlimPajama-derived corpus
Methodology:
Progressive training on increasing context lengths, NTK-aware interpolation to initialize RoPE theta
Context Length:
1048576 (1024K)
Training Time:
529 hours
Hardware Used:
Crusoe Energy high-performance L40S GPU cluster; Meta's Research SuperCluster (H100 80GB GPUs)
Model Architecture:
Optimized transformer architecture; RoPE theta initialized with NTK-aware interpolation and then further optimized (see the sketch below)
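
The NTK-aware initialization mentioned above amounts to raising the RoPE base ("theta") rather than compressing position ids. A minimal sketch in Python, assuming the commonly cited scaling rule theta' = theta * s^(d/(d-2)) for scale factor s and head dimension d; the constants are illustrative, and Gradient optimized theta further during training rather than using the raw formula as-is:

```python
# Sketch of NTK-aware RoPE theta scaling (illustrative values, not
# Gradient's exact training configuration).

def ntk_scaled_rope_base(base: float, scale: float, head_dim: int) -> float:
    """Raise the RoPE base so positions up to `scale` times the original
    context remain distinguishable, instead of compressing position ids."""
    return base * scale ** (head_dim / (head_dim - 2))

def rope_inv_freq(base: float, head_dim: int) -> list[float]:
    """Per-dimension inverse frequencies used by rotary embeddings."""
    return [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

# Llama 3 8B uses head_dim = 128 and base theta = 500000, trained at 8K
# context; extending 8K -> 1048K is a scale factor of 128.
theta = ntk_scaled_rope_base(base=500_000.0, scale=1_048_576 / 8_192, head_dim=128)
print(f"NTK-initialized rope theta ~ {theta:,.0f}")
```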
Safety Evaluation 
Methodologies:
Red teaming, Adversarial tests
Findings:
Residual risks are minimized, with a focus on limiting false refusals while maintaining model helpfulness
Risk Categories:
Cybersecurity risks, Child safety risks, CBRNE hazards
Ethical Considerations:
Transparency, rapid feedback loops, community collaboration for safety
Responsible AI Considerations 
Fairness:
Model designed to be helpful and unbiased across different use cases
Transparency:
Open approach with community feedback to ensure improvements in safety and efficiency
Accountability:
Meta ensures accountability through detailed Responsible Use Guide and community interactions
Mitigation Strategies:
Deployment of Meta Llama Guard 2 and Code Shield safeguards
Input Output 
Input Format:
Text input
Accepted Modalities:
text
Output Format:
Text generation
Performance Tips:
Use the Llama 3 instruct prompt format and structure inputs to exploit the model's long-context capability (see the sketch below)
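
For the input format above: Llama 3 instruct models expect a specific header/end-of-turn token layout, which the transformers library can render from the repo's bundled chat template. A minimal sketch (only the tokenizer is loaded here; the EXL2 weights themselves require an exllamav2-compatible runtime, shown further below):

```python
# Sketch: render the Llama 3 instruct prompt format from the repo's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "LoneStriker/Llama-3-8B-Instruct-Gradient-1048k-8.0bpw-h8-exl2"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the text below.\n\n<long document here>"},
]

# Produces: <|begin_of_text|><|start_header_id|>system<|end_header_id|> ...
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```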
Release Notes 
Version:
8B-Instruct
Date:
April 18, 2024
Notes:
Extended context, improved training efficiency with long contexts
LLM Name: Llama 3 8B Instruct Gradient 1048K 8.0bpw H8 EXL2
Repository: https://huggingface.co/LoneStriker/Llama-3-8B-Instruct-Gradient-1048k-8.0bpw-h8-exl2
Model Size: 8b
Required VRAM: 8.6 GB
Updated: 2024-12-02
Maintainer: LoneStriker
Model Type: llama
Instruction-Based: Yes
Model Files: 8.6 GB
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 1048576
Model Max Length: 1048576
Transformers Version: 4.39.1
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
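
Because the Quantization Type is exl2 (8.0 bits per weight, with an 8-bit output head per the H8 tag), the weights load with the exllamav2 runtime rather than plain transformers. A minimal sketch, assuming a local clone of the repository above and the exllamav2 generator API as of 2024; the path, max_seq_len, and sampling values are illustrative:

```python
# Sketch: loading an EXL2 quant with exllamav2 (illustrative settings).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Llama-3-8B-Instruct-Gradient-1048k-8.0bpw-h8-exl2"  # local clone
config.prepare()
config.max_seq_len = 32768  # raise toward 1048576 only if you have VRAM for the cache

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # KV cache sized to max_seq_len
model.load_autosplit(cache)               # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("<|begin_of_text|>...", settings, num_tokens=200))
```

Note that the 8.6 GB Required VRAM covers weights only: with Llama 3 8B's GQA layout (32 layers, 8 KV heads, head dim 128), an fp16 KV cache costs about 128 KiB per token, so a full 1,048,576-token context would need roughly 128 GiB for the cache alone. Cap max_seq_len to what your hardware can actually hold.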

Best Alternatives to Llama 3 8B Instruct Gradient 1048K 8.0bpw H8 EXL2

| Best Alternatives | Context / RAM | Downloads · Likes |
|---|---|---|
| ...B Instruct Gradient 1048K 4bit | 1024K / 4.5 GB | 72 |
| ...B Instruct Gradient 1048K 8bit | 1024K / 8.6 GB | 61 |
| ...truct Gradient 1048K Bpw6 EXL2 | 1024K / 6.7 GB | 72 |
| ...truct Gradient 1048K Bpw5 EXL2 | 1024K / 5.8 GB | 50 |
| Llama 3 8B Instruct 1048K 4bit | 1024K / 4.5 GB | 1225 |
| Llama 3 8B Instruct 1048K 8bit | 1024K / 8.6 GB | 2017 |
| ...ct Gradient 1048K Bpw2.25 EXL2 | 1024K / 3.4 GB | 61 |
| ...B Instruct 262k V2 EXL2 6.0bpw | 256K / 6.7 GB | 1001 |
| Llama 3 8B Instruct 262K 2bit | 256K / 2.5 GB | 41 |
| ...B Instruct 262k V2 EXL2 5.0bpw | 256K / 5.8 GB | 41 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124