Llama3.1 Aloe Beta 8B by HPAI-BSC


Tags: Merged Model · Arxiv:2405.01886 · Autotrain compatible · Biology · Healthcare · Instruct · Llama · Medical · English (en) · Endpoints compatible · Region: US · Safetensors · Sharded · Tensorflow
Datasets: HPAI-BSC/Aloe-Beta-General-Collection, HPAI-BSC/chain-of-diagnosis, HPAI-BSC/headqa-cot-llama31, HPAI-BSC/medmcqa-cot-llama31, HPAI-BSC/medqa-cot-llama31, HPAI-BSC/MedS-Ins, HPAI-BSC/MMLU-medical-cot-llama31, HPAI-BSC/Polymed-QA, HPAI-BSC/pubmedqa-cot-llama31, HPAI-BSC/ultramedical

Llama3.1 Aloe Beta 8B Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama3.1 Aloe Beta 8B (HPAI-BSC/Llama3.1-Aloe-Beta-8B)

Llama3.1 Aloe Beta 8B Parameters and Internals

Model Type: Causal decoder-only transformer language model
Supported Languages: English; capable in other languages, but not formally evaluated on them
Training Details:
Data Sources: HPAI-BSC/Aloe-Beta-General-Collection, HPAI-BSC/chain-of-diagnosis, HPAI-BSC/MedS-Ins, HPAI-BSC/ultramedical, HPAI-BSC/pubmedqa-cot-llama31, HPAI-BSC/medqa-cot-llama31, HPAI-BSC/medmcqa-cot-llama31, HPAI-BSC/headqa-cot-llama31, HPAI-BSC/MMLU-medical-cot-llama31, HPAI-BSC/Polymed-QA
Data Volume: 1.8B tokens
Methodology: Supervised fine-tuning of Llama 3.1 using axolotl, with DeepSpeed ZeRO-3 distributed training
Training Sequence Length: 16384 tokens
Hardware Used: 32x NVIDIA Hopper H100 64GB for the 8B model; 64x NVIDIA Hopper H100 64GB for the 70B model
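
Since the card lists a standard LlamaForCausalLM checkpoint stored in bfloat16, inference follows the usual transformers chat-template workflow. Below is a minimal sketch, assuming the repository ships a chat template (it is instruction-tuned); the system prompt, question, and generation settings are illustrative and not taken from the model card.

```python
# Minimal inference sketch for HPAI-BSC/Llama3.1-Aloe-Beta-8B.
# The prompt contents and generation settings are assumptions, not card values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},  # hypothetical
    {"role": "user", "content": "What are first-line treatments for hypertension?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```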
LLM Name: Llama3.1 Aloe Beta 8B
Repository: https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B
Merged Model: Yes
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-04-30
Maintainer: HPAI-BSC
Model Type: llama
Model Files: 4 safetensors shards (5.0 GB, 5.0 GB, 4.9 GB, 1.2 GB)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.46.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
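
The 16.1 GB "Required VRAM" figure corresponds to the bfloat16 weight footprint alone, which is also the sum of the four shards (5.0 + 5.0 + 4.9 + 1.2 GB); KV cache and activations come on top. A quick back-of-envelope check, assuming the commonly cited ~8.03B parameter count for Llama 3.1 8B:

```python
# Weight-memory estimate only; 8.03e9 is an assumed parameter count for
# Llama 3.1 8B, not a figure taken from this card.
params = 8.03e9          # parameters (assumption)
bytes_per_param = 2      # bfloat16 stores each parameter in 2 bytes
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.1f} GB of weights")  # ~16.1 GB
```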

Best Alternatives to Llama3.1 Aloe Beta 8B

Best Alternatives | Context / RAM | Downloads / Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 4363104
UltraLong Thinking | 4192K / 16.1 GB | 732
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 129215
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 407140
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 312
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 19316679
B7 | 1024K / 16.1 GB | 1200



Original data from Hugging Face, OpenCompass, and various public git repos.
Release v20241227