Llama 3.3 70B Instruct AutoRound GPTQ 4bit by Satwik11


Tags: 4-bit, 4bit, Autoregressive, Autotrain compatible, Base model:meta-llama/llama-3..., Base model:quantized:meta-llam..., Conversational, Dataset:meta-llama/llama-3.3-7..., Endpoints compatible, Gptq, Instruct, Llama, Quantization, Quantized, Region:us, Safetensors, Sharded, Tensorflow
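
The AutoRound and GPTQ tags indicate a checkpoint quantized with Intel's auto-round toolkit and exported in GPTQ format. The maintainer's exact recipe is not published on this page; the sketch below shows how such a 4-bit export is typically produced, and the group size, symmetry, and calibration settings are assumed defaults rather than the values actually used.

```python
# Hypothetical sketch of a 4-bit AutoRound -> GPTQ export (not the maintainer's
# published recipe). Assumes the Intel `auto-round` package is installed and
# that enough memory is available to hold the full-precision 70B model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

base_id = "meta-llama/Llama-3.3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# bits=4 matches the "4bit" tag; group_size=128 and sym=True are assumed defaults.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()

# Export in GPTQ layout so the result loads through the usual GPTQ integrations.
autoround.save_quantized(
    "Llama-3.3-70B-Instruct-AutoRound-GPTQ-4bit",
    format="auto_gptq",
    inplace=True,
)
```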

Llama 3.3 70B Instruct AutoRound GPTQ 4bit Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4"). No benchmark scores are listed for Llama 3.3 70B Instruct AutoRound GPTQ 4bit (Satwik11/Llama-3.3-70B-Instruct-AutoRound-GPTQ-4bit).

Llama 3.3 70B Instruct AutoRound GPTQ 4bit Parameters and Internals

LLM Name: Llama 3.3 70B Instruct AutoRound GPTQ 4bit
Repository: 🤗 https://huggingface.co/Satwik11/Llama-3.3-70B-Instruct-AutoRound-GPTQ-4bit
Base Model(s): meta-llama/Llama-3.3-70B-Instruct
Model Size: 70b
Required VRAM: 39.9 GB
Updated: 2024-12-21
Maintainer: Satwik11
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-9), 4.9 GB (2-of-9), 4.9 GB (3-of-9), 4.9 GB (4-of-9), 4.9 GB (5-of-9), 4.9 GB (6-of-9), 4.9 GB (7-of-9), 3.4 GB (8-of-9), 2.1 GB (9-of-9)
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.47.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
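
Given the LlamaForCausalLM architecture, GPTQ 4-bit weights, and fast tokenizer listed above, the checkpoint should load through the standard Transformers GPTQ path. Below is a minimal loading sketch, assuming a GPTQ-capable backend (e.g. auto-gptq or GPTQModel) plus accelerate are installed and roughly 40 GB of GPU memory is available for the weights alone.

```python
# Minimal loading sketch; assumes a GPTQ backend and ~40 GB of GPU memory
# for the 4-bit weights (KV cache for long contexts is extra).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Satwik11/Llama-3.3-70B-Instruct-AutoRound-GPTQ-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # shard across available GPUs (requires accelerate)
    torch_dtype="auto",  # non-quantized tensors stay bfloat16, per the config
)

# Llama 3.3 Instruct checkpoints ship a chat template, so apply_chat_template
# is the usual way to build prompts.
messages = [{"role": "user", "content": "Summarize what GPTQ 4-bit quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```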

Best Alternatives to Llama 3.3 70B Instruct AutoRound GPTQ 4bit

Best Alternatives | Context / RAM | Downloads | Likes
...B Instruct AutoRound GPTQ 4bit | 128K / 39.9 GB | 2290 | 5
...ama 3.1 70B Instruct Gptq 4bit | 128K / 39.9 GB | 354 | 4
Meta Llama 3 70B Instruct GPTQ | 8K / 39.8 GB | 827 | 16
...erkrautLM 70B Instruct GPTQ 8B | 8K / 74.4 GB | 413 | 1
...ama 3 Taiwan 70B Instruct GPTQ | 8K / 39.8 GB | 21 | 2
Meta Llama 3 70B Instruct GPTQ | 8K / 39.8 GB | 453 | 19
...ta Llama 3 70B Instruct Marlin | 8K / 39.5 GB | 376 | 6
...g Llama 3 70B Instruct GPTQ 8B | 8K / 74.4 GB | 3 | 1
...g Llama 3 70B Instruct GPTQ 4B | 8K / 39.8 GB | 8 | 0
...a Llama 3 70B Instruct GPTQ 8B | 8K / 74.4 GB | 9 | 0
Note: green Score (e.g. "73.2") means that the model is better than Satwik11/Llama-3.3-70B-Instruct-AutoRound-GPTQ-4bit.
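
The Context / RAM column above covers only the quantized weight files (39.9 GB for this model); the KV cache for long prompts comes on top of that. Below is a back-of-the-envelope estimate, assuming the standard Llama 3.x 70B attention geometry (80 layers, 8 grouped-query KV heads of dimension 128) and a 16-bit KV cache; these figures are assumptions taken from the public Llama 3 70B config, not read from this repository.

```python
# Rough KV-cache sizing for a Llama 3.x 70B-class model. The layer/head numbers
# below are assumed from the public Llama 3 70B config, not from this repo.
layers, kv_heads, head_dim, bytes_per_value = 80, 8, 128, 2  # bf16/fp16 cache

def kv_cache_gib(context_tokens: int) -> float:
    """Memory for keys + values across all layers, in GiB."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # K and V
    return context_tokens * per_token / 1024**3

for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gib(ctx):.1f} GiB of KV cache")
# At the full 131072-token context this works out to ~40 GiB, i.e. roughly as
# much memory again as the 39.9 GB of quantized weights.
```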


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217