Sparse Llama 3.1 8B 2of4 by neuralmagic


Tags: Arxiv:2301.00774, Arxiv:2310.06927, Base model:finetune:meta-llama..., Base model:meta-llama/llama-3...., Region:us, Safetensors, Sharded, Sparsity, Tensorflow, Vllm

Sparse Llama 3.1 8B 2of4 Benchmarks

Benchmark scores (displayed as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Sparse Llama 3.1 8B 2of4 (neuralmagic/Sparse-Llama-3.1-8B-2of4)

Sparse Llama 3.1 8B 2of4 Parameters and Internals

Model Type: text-generation

Additional Notes: This model applies 2:4 structured sparsity (at most two non-zero weights in every contiguous group of four) to improve inference efficiency while maintaining accuracy; a minimal illustration of the pattern follows the release notes below.

Training Details:
- Data Volume: 13B tokens
- Methodology: SparseGPT (via LLM Compressor) with SquareHead knowledge distillation
- Model Architecture: Llama-3.1-8B

Input/Output:
- Input Format: text
- Accepted Modalities: text
- Output Format: text

Release Notes:
- Version: 1.0
- Date: 2024-11-20
- Notes: Release of the 2:4 sparse version of Llama-3.1-8B, focusing on sparsity and optimized training methods for efficiency.
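
The "2of4" in the name denotes 2:4 structured sparsity: in every contiguous group of four weights, at most two are non-zero, a pattern that recent NVIDIA GPUs can accelerate. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern using plain magnitude pruning; it is not Neural Magic's pipeline (SparseGPT selects and compensates pruned weights far more carefully than this).

```python
import torch

def satisfies_2of4(weight: torch.Tensor) -> bool:
    """True if every contiguous group of 4 entries along the last
    dimension has at most 2 non-zeros (the 2:4 sparsity pattern)."""
    rows, cols = weight.shape
    assert cols % 4 == 0, "input dimension must be divisible by 4"
    groups = weight.reshape(rows, cols // 4, 4)
    return bool(((groups != 0).sum(dim=-1) <= 2).all())

# Illustrative magnitude pruning to 2:4: zero the two smallest-magnitude
# weights in every group of four (toy example, not SparseGPT).
w = torch.randn(8, 16)
groups = w.reshape(8, 4, 4)
drop = groups.abs().topk(2, dim=-1, largest=False).indices
w_2of4 = groups.scatter(-1, drop, 0.0).reshape(8, 16)

assert satisfies_2of4(w_2of4)
print(f"density: {(w_2of4 != 0).float().mean():.2f}")  # ~0.50
```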
LLM Name: Sparse Llama 3.1 8B 2of4
Repository: 🤗 https://huggingface.co/neuralmagic/Sparse-Llama-3.1-8B-2of4
Base Model(s): meta-llama/Llama-3.1-8B
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-03-24
Maintainer: neuralmagic
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.45.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
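
Given the details above (a bfloat16 checkpoint of about 16.1 GB with a 131072-token context window), here is a hedged usage sketch with vLLM, which the model's tags suggest is supported. The prompt and sampling settings are illustrative assumptions; whether the 2:4 sparsity is actually accelerated depends on the vLLM version and hardware.

```python
from vllm import LLM, SamplingParams

# Repo ID and dtype come from the details above; everything else is
# an illustrative assumption, not a setting from the model card.
llm = LLM(model="neuralmagic/Sparse-Llama-3.1-8B-2of4", dtype="bfloat16")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain 2:4 structured sparsity briefly."], params)
print(outputs[0].outputs[0].text)
```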

Best Alternatives to Sparse Llama 3.1 8B 2of4

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5272 | 682 |
| A6 | 1024K / 16.1 GB | 386 | 0 |
| A8 | 1024K / 16.1 GB | 284 | 0 |
| A4 | 1024K / 16.1 GB | 363 | 0 |
| A2 | 1024K / 16.1 GB | 360 | 0 |
| A18 | 1024K / 16.1 GB | 272 | 0 |
| A10 | 1024K / 16.1 GB | 303 | 0 |
| A12 | 1024K / 16.1 GB | 256 | 0 |
| A1 | 1024K / 16.1 GB | 308 | 0 |
| C31 | 1024K / 16.1 GB | 183 | 0 |
Note: a green score (e.g. "73.2") means the model is better than neuralmagic/Sparse-Llama-3.1-8B-2of4.

Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241227