Llama 3 8B Merged Linear by vhab10


Tags: Merged Model, 4-bit, Autotrain compatible, Base model: meta-llama/meta-lla..., Base model (quantized): meta-llam..., Bitsandbytes, Conversational, En, Endpoints compatible, Llama, Model-merging, Region: us, Safetensors, Sharded, Tensorflow

Llama 3 8B Merged Linear Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Llama 3 8B Merged Linear (vhab10/llama-3-8b-merged-linear)

Llama 3 8B Merged Linear Parameters and Internals

Model Type: text generation, model merging
Additional Notes: This model is a linear merge of three Llama 3 8B models created with the MergeKit tool (a minimal configuration sketch follows below).
Supported Languages: English (the source models are multilingual and may also cover Italian and potentially other languages)
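The card does not publish the merge recipe itself. The sketch below is a hypothetical MergeKit linear-merge configuration, assuming meta-llama/Meta-Llama-3-8B (the only base model listed) plus two placeholder fine-tunes with equal weights; it illustrates the technique rather than the maintainer's actual setup.

```python
# Hypothetical MergeKit linear-merge setup (placeholders, not the maintainer's real config).
# Requires: pip install mergekit
import subprocess
from pathlib import Path

config = """\
merge_method: linear        # weighted average of the source checkpoints
dtype: float16              # matches the card's Torch data type
models:
  - model: meta-llama/Meta-Llama-3-8B       # base model listed on the card
    parameters:
      weight: 1.0
  - model: your-org/llama-3-8b-finetune-a   # placeholder for the second source model
    parameters:
      weight: 1.0
  - model: your-org/llama-3-8b-finetune-b   # placeholder for the third source model
    parameters:
      weight: 1.0
"""

Path("linear-merge.yaml").write_text(config)

# mergekit-yaml <config> <output-dir> downloads the sources and writes the merged checkpoint.
subprocess.run(["mergekit-yaml", "linear-merge.yaml", "./llama-3-8b-merged-linear"], check=True)
```

With merge_method: linear, MergeKit takes a normalized weighted average of corresponding tensors, which is why the merged model keeps the same LlamaForCausalLM architecture, 8B parameter count, and 8192-token context as its parents.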
LLM Name: Llama 3 8B Merged Linear
Repository: https://huggingface.co/vhab10/llama-3-8b-merged-linear
Base Model(s): Meta Llama 3 8B (meta-llama/Meta-Llama-3-8B)
Merged Model: Yes
Model Size: 8B
Required VRAM: 5.8 GB
Updated: 2025-06-01
Maintainer: vhab10
Model Type: llama
Model Files: 4.7 GB (1-of-2), 1.1 GB (2-of-2)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.44.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
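Given the spec above (LlamaForCausalLM, float16 shards totalling roughly 5.8 GB, 4-bit/Bitsandbytes tags, Transformers 4.44.2), loading the checkpoint typically looks like the sketch below. This is an illustrative example rather than official usage notes from the maintainer; the BitsAndBytesConfig is optional and only needed if you want the 4-bit footprint the tags advertise.

```python
# Illustrative loading sketch for vhab10/llama-3-8b-merged-linear (not official usage notes).
# Requires: pip install "transformers>=4.44" accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "vhab10/llama-3-8b-merged-linear"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # PreTrainedTokenizerFast, 128256-token vocabulary

# Optional 4-bit quantization via bitsandbytes, matching the card's 4-bit tag.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,   # drop this argument to load the plain float16 shards
    torch_dtype=torch.float16,
    device_map="auto",
)

# Simple completion-style prompt; adjust if the merged sources expect a chat template.
prompt = "The history of the Llama series of language models begins with"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # context window is 8192 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```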

Best Alternatives to Llama 3 8B Merged Linear

Best Alternatives | Context / RAM | Downloads / Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 3482107
UltraLong Thinking | 4192K / 16.1 GB | 3122
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 41815
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 7301
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 179443
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729
...xis Bookwriter Llama3.1 8B Sft | 1048K / 16.1 GB | 534
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 573

Rank the Llama 3 8B Merged Linear Capabilities

Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227