Llama3 8B Cpt Sea Lionv2 Base by aisingapore


Tags: Arxiv:2101.09635, Arxiv:2309.06085, Autotrain compatible, Base model (finetune): meta-llama/Meta-Llama-3-8B-Instruct, En, Endpoints compatible, Id, Instruct, Llama, Region:us, Safetensors, Sharded, Ta, Tensorflow, Th, Vi

Llama3 8B Cpt Sea Lionv2 Base Parameters and Internals

Model Type: Decoder

Additional Notes: The model has not been aligned for safety. Developers and users should apply their own safety fine-tuning and related security measures.

Supported Languages: en (English), id (Indonesian), th (Thai), vi (Vietnamese), ta (Tamil)

Training Details
Data Sources: Dolma RefinedWeb (English), Dolma C4 (English), Dolma Reddit (English), Dolma Semantic Scholar, Dolma arXiv, Dolma StarCoder, SEA-LION Pile (Indonesian), Wiki* (Indonesian), SEA-LION Pile (Tamil), Wiki* + News (Tamil), SEA-LION Pile (Thai), WangChanBERTa (Thai), Wiki* (Thai), SEA-LION Pile (Vietnamese), Wiki* (Vietnamese)
Data Volume: 48 billion tokens
Methodology: Continued pre-training from Meta-Llama-3-8B-Instruct
Training Time: 2 days
Hardware Used: 8 AWS EC2 p5d.24xlarge instances (64 Nvidia H100 80GB GPUs)
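
These training figures imply an average throughput on the order of 4,000–4,500 tokens per second per GPU. A back-of-the-envelope check derived purely from the numbers above, not an official figure from the model card:

```python
# Rough throughput implied by the stated training figures:
# 48 billion tokens in 2 days of wall-clock time on 64 Nvidia H100 80GB GPUs.
tokens = 48e9
gpus = 64
seconds = 2 * 24 * 3600

per_gpu_tps = tokens / (gpus * seconds)
print(f"~{per_gpu_tps:,.0f} tokens/s per GPU")  # prints "~4,340 tokens/s per GPU"
```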

LLM Name: Llama3 8B Cpt Sea Lionv2 Base
Repository 🤗: https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2-base
Base Model(s): Meta Llama 3 8B Instruct (meta-llama/Meta-Llama-3-8B-Instruct)
Model Size: 8b
Required VRAM: 16.1 GB
Updated: 2025-03-24
Maintainer: aisingapore
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en, id, ta, th, vi
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.42.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
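
The checkpoint is a standard LlamaForCausalLM stored in bfloat16 (8B parameters at 2 bytes per weight is roughly the 16.1 GB listed above), so it loads with the usual Hugging Face APIs. A minimal sketch, assuming transformers >= 4.42.3 and accelerate are installed and a GPU with about 16 GB of free memory is available; the Indonesian prompt is purely illustrative, and because this is an unaligned base model no chat template is applied:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/llama3-8b-cpt-sea-lionv2-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the stored Torch data type (bfloat16)
    device_map="auto",           # requires accelerate; distributes the 4 shards automatically
)

# Base (continued pre-trained, not safety-aligned) model: plain text completion.
prompt = "Ibu kota Indonesia adalah"  # illustrative Indonesian prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Prompts longer than the 8192-token context length listed above would need to be truncated or chunked.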

Best Alternatives to Llama3 8B Cpt Sea Lionv2 Base

Best Alternatives | Context / RAM | Downloads | Likes
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5272 | 682
P | 1024K / 16.1 GB | 119 | 0
93 | 1024K / 16.1 GB | 84 | 0
117 | 1024K / 16.1 GB | 33 | 0
Mpasila Viking 8B | 1024K / 16.1 GB | 84 | 0
93a | 1024K / 16.1 GB | 20 | 0
16 | 1024K / 16.1 GB | 169 | 0
126 | 1024K / 16.1 GB | 13 | 0
136 | 1024K / 16.1 GB | 13 | 0
133 | 1024K / 16.1 GB | 12 | 0

Note: a green score (e.g. "73.2") on the source page means that model performs better than aisingapore/llama3-8b-cpt-sea-lionv2-base.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227