Llama 3 8B 1M PoSE by winglian


Tags: arXiv:2309.10400, autotrain-compatible, axolotl, en, endpoints-compatible, facebook, llama, llama-3, meta, pytorch, region:us, safetensors, sharded, tensorflow


Llama 3 8B 1M PoSE Parameters and Internals

Model Type: text generation
Use Cases
Areas: commercial, research
Applications: assistant-like chat, natural language generation tasks
Limitations: any use that violates applicable laws or regulations; use in languages other than English
Considerations: fine-tuning for languages beyond English is allowed under specific license terms
Additional Notes: The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on industry benchmarks.
Supported Languages: en (high)
Training Details
Data Sources: publicly available online data, SlimPajama
Data Volume: 15 trillion tokens
Methodology: auto-regressive language model with an optimized transformer architecture
Context Length: 8,192 tokens for base pretraining (extended to 1M in this model; see below)
Hardware Used: H100-80GB (TDP of 700W)
Model Architecture: transformer with Grouped-Query Attention (GQA); a minimal sketch follows this list
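Grouped-Query Attention cuts the size of the key/value cache by letting a group of query heads share a single key/value head. The sketch below is a simplified illustration assuming PyTorch 2.x, not the model's actual code: the gqa_attention helper and the toy sequence length are ours, while the 32 query heads and 8 KV heads match the published Llama 3 8B configuration.

import torch
import torch.nn.functional as F

def gqa_attention(q, k, v):
    # q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim).
    # Each KV head serves n_q_heads // n_kv_heads query heads.
    groups = q.shape[1] // k.shape[1]
    # Repeat the KV heads so every query head in a group sees its shared KV head.
    k = k.repeat_interleave(groups, dim=1)
    v = v.repeat_interleave(groups, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

q = torch.randn(1, 32, 16, 128)  # 32 query heads (Llama 3 8B)
k = torch.randn(1, 8, 16, 128)   # 8 shared KV heads -> 4 query heads per group
v = torch.randn(1, 8, 16, 128)
out = gqa_attention(q, k, v)     # (1, 32, 16, 128)

With 8 KV heads instead of 32, the KV cache is a quarter of the size it would be under standard multi-head attention, which matters at a 1M-token context.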
Safety Evaluation
Methodologies: red teaming, adversarial evaluations
Risk Categories: misinformation, bias
Responsible AI Considerations
Mitigation Strategies: safety tools and evaluations (e.g., Purple Llama)
Input Output
Input Format: text
Accepted Modalities: text
Output Format: text and code
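Given the plain text-in / text-out interface above, the model loads with the standard transformers text-generation API. A minimal loading sketch, assuming a recent transformers release (the card pins 4.40.0.dev0) and a GPU with room for the roughly 16.1 GB of bfloat16 weights; the prompt is only an example.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "winglian/llama-3-8b-1m-PoSE"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's listed torch data type
    device_map="auto",           # spread the ~16 GB of weights across available devices
)

inputs = tokenizer("Positional skip-wise training lets a model", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))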
LLM Name: Llama 3 8B 1M PoSE
Repository: https://huggingface.co/winglian/llama-3-8b-1m-PoSE
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-05-01
Maintainer: winglian
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en
Model Architecture: LlamaForCausalLM
Context Length: 1048576
Model Max Length: 1048576
Transformers Version: 4.40.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
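The jump from the base model's 8K pretraining window to the 1048576-token context above is what PoSE (Positional Skip-wisE training, arXiv:2309.10400) provides: the model is fine-tuned on short sequences whose position ids are split into chunks and shifted by random offsets, so the full 1M positional range gets exercised at short-sequence cost. Below is a minimal sketch of that position-id manipulation; the chunk count, defaults, and helper name are our own illustrative assumptions, not winglian's exact training recipe.

import random

def pose_position_ids(train_len=8192, target_len=1048576, n_chunks=2):
    # Cut the short training window into n_chunks contiguous chunks.
    bounds = sorted(random.sample(range(1, train_len), n_chunks - 1))
    chunk_lens = [b - a for a, b in zip([0] + bounds, bounds + [train_len])]
    position_ids, offset = [], 0
    budget = target_len - train_len  # total positions we are allowed to skip
    for length in chunk_lens:
        skip = random.randint(0, budget)  # random skipping bias before this chunk
        budget -= skip
        offset += skip
        position_ids.extend(range(offset, offset + length))
        offset += length
    return position_ids

ids = pose_position_ids()
print(len(ids), ids[0], ids[-1])  # 8192 ids, spread somewhere inside the 1M window

Because attention still runs over only train_len real tokens, training cost stays at the short-context level while the position embeddings are trained across the entire target window.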

Best Alternatives to Llama 3 8B 1M PoSE

Best Alternatives                    Context / RAM      Downloads   Likes
...otron 8B UltraLong 4M Instruct    4192K / 32.1 GB    5052        104
UltraLong Thinking                   4192K / 16.1 GB    83          2
...a 3.1 8B UltraLong 4M Instruct    4192K / 32.1 GB    176         24
...otron 8B UltraLong 2M Instruct    2096K / 32.1 GB    1299        15
...a 3.1 8B UltraLong 2M Instruct    2096K / 32.1 GB    875         9
...otron 8B UltraLong 1M Instruct    1048K / 32.1 GB    4109        39
...a 3.1 8B UltraLong 1M Instruct    1048K / 32.1 GB    1387        29
....1 1million Ctx Dark Planet 8B    1048K / 32.3 GB    32          2
...a 3 8B Instruct Gradient 1048K    1024K / 16.1 GB    20842       679
F6                                   1024K / 16.1 GB    78          0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227