Meta Llama 3 70B Instruct GPTQ Int8 by study-hjt


Tags: 8-bit, Autotrain compatible, En, Endpoints compatible, Facebook, Gptq, Instruct, Int8, Llama, Llama-3, Llama3, Meta, Pytorch, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Meta Llama 3 70B Instruct GPTQ Int8 Benchmarks

Benchmark scores (reported as percentages) indicate how the model compares to the reference models Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Meta Llama 3 70B Instruct GPTQ Int8 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
commercial, research
Applications:
dialogue, chat assistants, general text generation
Primary Use Cases:
assistant-like chat
Limitations:
Limited to English by default; other languages require compliance with licensing terms.
Considerations:
Developers are encouraged to follow safety guidelines and incorporate community feedback.
Additional Notes 
Users are advised to leverage community resources and contribute feedback toward model improvements.
Training Details 
Data Sources:
publicly available online data
Data Volume:
15T+ tokens
Methodology:
Supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF)
Context Length:
8192
Training Time:
7.7M GPU hours
Hardware Used:
H100-80GB GPUs
Model Architecture:
auto-regressive transformer architecture
Safety Evaluation 
Methodologies:
extensive red teaming exercises, adversarial evaluations
Findings:
Extensive mitigation techniques were applied to lower residual risks.
Risk Categories:
misinformation, bias, ethical considerations
Ethical Considerations:
Developers are advised to conduct safety testing tailored to their specific applications.
Responsible AI Considerations 
Fairness:
Efforts made to reduce bias during model training.
Transparency:
Open source and community driven, with guidelines for responsible use.
Accountability:
Developers are responsible for safe use aligned with policy.
Mitigation Strategies:
Guidelines and tools provided for safe deployment, including Meta Llama Guard 2 and Code Shield.
Input Output 
Input Format:
Text input only
Accepted Modalities:
text
Output Format:
Generates text and code
Performance Tips:
Use the GPTQ Int8 quantization to reduce memory use; a minimal loading and chat sketch follows the release notes below.
Release Notes 
Version:
3.0
Date:
April 18, 2024
Notes:
Initial release of Meta Llama 3 models, including advancements in safety and performance.
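
As noted in the Input Output section above, the model is intended for assistant-like chat over text, with an 8,192-token context and GPTQ Int8 weights for memory efficiency. The snippet below is a minimal sketch of loading and prompting this checkpoint, assuming the standard transformers (>= 4.39) chat-template and generation APIs with auto-gptq/optimum installed; the prompt, sampling settings, and hardware assumptions (enough GPU memory for roughly 74.5 GB of weights) are illustrative, not an official recipe.

```python
# Minimal loading/chat sketch for study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int8.
# Assumptions: transformers >= 4.39 with auto-gptq/optimum installed, and enough
# GPU memory for roughly 74.5 GB of int8 weights. Not an official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 dtype listed below
    device_map="auto",          # shard the weights across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain GPTQ int8 quantization in two sentences."},
]

# The tokenizer carries the Llama 3 instruct chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Llama 3 instruct ends assistant turns with <|eot_id|>, so treat it as an EOS token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
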
LLM Name: Meta Llama 3 70B Instruct GPTQ Int8
Repository: https://huggingface.co/study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int8
Base Model(s): fakezeta/Meta-Llama-3-70B-Instruct-ov-int4
Model Size: 70b
Required VRAM: 74.5 GB
Updated: 2024-12-22
Maintainer: study-hjt
Model Type: llama
Instruction-Based: Yes
Model Files: 10.0 GB (1-of-8), 9.8 GB (2-of-8), 9.9 GB (3-of-8), 9.9 GB (4-of-8), 10.0 GB (5-of-8), 10.0 GB (6-of-8), 9.9 GB (7-of-8), 5.0 GB (8-of-8)
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
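
The headline numbers above (8,192-token limit, 128,256-token vocabulary, <|end_of_text|> padding token, GPTQ settings) can be checked from the checkpoint's config and tokenizer alone, without pulling down the ~74.5 GB of weight shards. This is a small sketch assuming the standard transformers AutoConfig / AutoTokenizer APIs; the exact fields inside quantization_config depend on how the GPTQ export was produced.

```python
# Inspect the checkpoint metadata without downloading the weight shards.
# Assumption: standard transformers AutoConfig / AutoTokenizer behavior.
from transformers import AutoConfig, AutoTokenizer

model_id = "study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int8"

config = AutoConfig.from_pretrained(model_id)
print(config.model_type)               # "llama" -> LlamaForCausalLM
print(config.max_position_embeddings)  # 8192-token context window
print(config.vocab_size)               # 128256
print(config.quantization_config)      # GPTQ parameters (e.g. bits=8, group size)

tokenizer = AutoTokenizer.from_pretrained(model_id)
print(tokenizer.pad_token)             # "<|end_of_text|>"
print(len(tokenizer))                  # vocabulary size
```
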

Best Alternatives to Meta Llama 3 70B Instruct GPTQ Int8

Best Alternatives                      Context / RAM     Downloads / Likes
...B Instruct AutoRound GPTQ 4bit      128K / 39.9 GB    22905
...B Instruct AutoRound GPTQ 4bit      128K / 39.9 GB    522
...ama 3.1 70B Instruct Gptq 4bit      128K / 39.9 GB    3544
Meta Llama 3 70B Instruct GPTQ         8K / 39.8 GB      82716
...erkrautLM 70B Instruct GPTQ 8B      8K / 74.4 GB      4151
Meta Llama 3 70B Instruct GPTQ         8K / 39.8 GB      47019
...ama 3 Taiwan 70B Instruct GPTQ      8K / 39.8 GB      212
...ta Llama 3 70B Instruct Marlin      8K / 39.5 GB      3766
...g Llama 3 70B Instruct GPTQ 4B      8K / 39.8 GB      80
...g Llama 3 70B Instruct GPTQ 8B      8K / 74.4 GB      41


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217