Llama 2 7B Pruned70 Retrained by neuralmagic


Arxiv: 1905.07830, 1907.10641, 1911.01547, 2009.03300, 2107.03374, 2109.07958, 2110.14168, 2301.00774, 2405.03594
Tags: AutoTrain compatible, Base model (finetune): neuralmagic/Llama-2-7b-pruned50-retrained, Dataset: cerebras/SlimPajama-627B, Endpoints compatible, Llama, Region: US, Safetensors, Sharded, Sparse, Tensorflow

Llama 2 7B Pruned70 Retrained Benchmarks

Benchmark scores (shown as nn.n%) compare the model to the reference models Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No benchmark scores are listed for Llama 2 7B Pruned70 Retrained (neuralmagic/Llama-2-7b-pruned70-retrained).

Llama 2 7B Pruned70 Retrained Parameters and Internals

Model Type: text-generation

Additional Notes: This model was pruned with SparseGPT and then retrained, preserving the sparsity pattern while recovering accuracy through efficient transfer learning.

Training Details:
Data Sources: Cerebras' SlimPajama dataset
Data Volume: 100B tokens of retraining after pruning to 50% sparsity, followed by another 100B tokens after pruning to 70% sparsity
Methodology: One-shot pruning with SparseGPT, followed by sparsity-preserving retraining (see the sparsity-check sketch below).
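
Unstructured pruning to 70% means roughly seven out of every ten weight entries in the pruned layers are exactly zero. A minimal sketch, assuming only PyTorch and Hugging Face Transformers, for checking that on the released checkpoint (the repository ID comes from this page; the ~70% figure is the expected result, not a guaranteed exact value):

import torch
from transformers import AutoModelForCausalLM

# Loading the checkpoint in bfloat16 needs roughly 13.5 GB of memory.
model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned70-retrained",
    torch_dtype=torch.bfloat16,
)

# Count exact zeros across the Linear layers, which is where SparseGPT prunes.
total = zeros = 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        weight = module.weight
        total += weight.numel()
        zeros += int((weight == 0).sum())

print(f"Linear-layer sparsity: {zeros / total:.1%}")  # expected to be near 70%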
LLM Name: Llama 2 7B Pruned70 Retrained
Repository: https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained
Base Model(s): Llama 2 7B Pruned50 Retrained (neuralmagic/Llama-2-7b-pruned50-retrained)
Model Size: 7B
Required VRAM: 13.5 GB
Updated: 2025-02-22
Maintainer: neuralmagic
Model Type: llama
Model Files: 1-of-3 (4.9 GB), 2-of-3 (5.0 GB), 3-of-3 (3.6 GB)
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.40.0
Tokenizer Class: LlamaTokenizerFast
Vocabulary Size: 32000
Torch Data Type: bfloat16
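
Given the metadata above, a minimal loading-and-generation sketch (assumes transformers >= 4.40.0 per the listed version, a torch build with bfloat16 support, and roughly 13.5 GB of free accelerator memory for the weights; the prompt string is illustrative only):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "neuralmagic/Llama-2-7b-pruned70-retrained"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizerFast, 32000-token vocabulary
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,   # matches the listed Torch Data Type
    device_map="auto",            # requires the accelerate package
)

# Context length and model max length are both 4096, so keep
# prompt plus generated tokens within that budget.
prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))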

Best Alternatives to Llama 2 7B Pruned70 Retrained

Best Alternatives | Context / RAM | Downloads | Likes
2 Very Sci Fi | 1024K / 16.1 GB | 317 | 0
...1M 1000000ctx AEZAKMI 3 1 1702 | 1024K / 13.5 GB | 23 | 1
... Qwen2.5llamaify 7B V23.1 200K | 195K / 15.2 GB | 3943 | 3
LlamaStock 8B | 128K / 16.1 GB | 11 | 1
SuperNeuralDreadDevil 8B | 128K / 16.1 GB | 54 | 1
Yarn Llama 2 7B 128K | 128K / 13.5 GB | 6422 | 39
LLaMA 7B PoSE YaRN 128K | 128K / 13.5 GB | 7 | 3
LLaMA 7B PoSE Linear 96K | 96K / 27 GB | 9 | 2
LLaMA 7B PoSE YaRN 96K | 96K / 13.5 GB | 11 | 1
Chat Llama2 7B 80K | 80K / 13.8 GB | 8 | 0
Note: a green score (e.g., "73.2") means the model outperforms neuralmagic/Llama-2-7b-pruned70-retrained.

Rank the Llama 2 7B Pruned70 Retrained Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227