ALMA 13B Pretrain by haoranxu


Tags: Arxiv:2309.11674 · Arxiv:2401.08417 · Autotrain compatible · Base model (finetune): meta-llama/Llama-2-13b-hf · Endpoints compatible · Llama · PyTorch · Region: us · Sharded

ALMA 13B Pretrain Benchmarks

ALMA 13B Pretrain (haoranxu/ALMA-13B-Pretrain)

ALMA 13B Pretrain Parameters and Internals

Model Type 
Translation
Use Cases 
Areas:
Translation, Research
Applications:
Machine Translation
Primary Use Cases:
Translating Chinese to English
Considerations:
Intended to be used together with its LoRA adapters.
Additional Notes 
ALMA-R incorporates Contrastive Preference Optimization (CPO) for improved performance.
Supported Languages 
Multilingual monolingual data; high-quality parallel data
Training Details 
Data Sources:
20B monolingual tokens, high-quality parallel data, triplet preference data
Data Volume:
20B tokens for the 7B model and 12B tokens for the 13B model
Methodology:
Two-step fine-tuning: first on monolingual data, then on parallel data; further optimized with Contrastive Preference Optimization (CPO); a loss sketch follows this subsection
Context Length:
4096
Model Architecture:
LLaMA-based, fine-tuned with LoRA
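
ALMA-R's CPO objective (Arxiv:2401.08417) pairs a preference term with a plain NLL regularizer on the preferred translation. Below is a minimal PyTorch sketch of that loss, assuming sequence-level log-probabilities are computed elsewhere and a beta of 0.1 as in the paper; it is an illustration, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def cpo_loss(logp_chosen: torch.Tensor,
             logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """CPO loss sketch: L = L_prefer + L_NLL.

    logp_chosen / logp_rejected: summed log-probabilities of the preferred
    and dis-preferred translations under the current policy, shape (batch,).
    """
    # Preference term: push the policy to rank the preferred translation
    # above the rejected one.
    l_prefer = -F.logsigmoid(beta * (logp_chosen - logp_rejected)).mean()
    # Regularizer: negative log-likelihood of the preferred translation.
    l_nll = -logp_chosen.mean()
    return l_prefer + l_nll
```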
Input Output 
Input Format:
Text input
Accepted Modalities:
text
Output Format:
Text output
Performance Tips:
Use the LoRA models together with the base model for intended performance (see the usage sketch below).
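
Concretely, pairing the pretrain checkpoint with its LoRA adapter looks like the sketch below, adapted from the ALMA README. The adapter repo id haoranxu/ALMA-13B-Pretrain-LoRA and the beam-search settings follow that README; verify them against the upstream card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Load the full-weight pretrain checkpoint, then attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", padding_side="left")

# ALMA's translation prompt template (here: Chinese to English).
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

with torch.no_grad():
    generated_ids = model.generate(
        input_ids=input_ids, num_beams=5, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```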
Release Notes 
Version:
ALMA-13B-LoRA
Date:
unknown
Notes:
Full-weight fine-tune of LLaMA-2-13B on 12B monolingual tokens, then LoRA fine-tune on human-written parallel data
Version:
ALMA-13B-R
Date:
unknown
Notes:
Further LoRA fine-tuning on top of ALMA-13B-LoRA with Contrastive Preference Optimization (CPO)
LLM Name: ALMA 13B Pretrain
Repository 🤗: https://huggingface.co/haoranxu/ALMA-13B-Pretrain
Base Model(s): Llama 2 13B Hf (meta-llama/Llama-2-13b-hf)
Model Size: 13b
Required VRAM: 52.1 GB
Updated: 2025-02-22
Maintainer: haoranxu
Model Type: llama
Model Files: 10.0 GB (1-of-6), 9.9 GB (2-of-6), 9.9 GB (3-of-6), 9.9 GB (4-of-6), 9.9 GB (5-of-6), 2.5 GB (6-of-6)
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float32
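
The header fields above can be checked directly against the Hub without pulling the 52 GB of weights; AutoConfig and AutoTokenizer fetch only the small JSON files.

```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("haoranxu/ALMA-13B-Pretrain")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain")

print(config.model_type)               # llama
print(config.max_position_embeddings)  # 4096 (context length)
print(config.vocab_size)               # 32000
print(config.torch_dtype)              # torch.float32
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>
```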

Quantized Models of the ALMA 13B Pretrain

Model | Likes | Downloads | VRAM
ALMA 13B Pretrain GGUF | 1 | 2280 | 5 GB
ALMA 13B Pretrain AWQ | 1 | 90 | 7 GB
ALMA 13B Pretrain GPTQ | 1 | 43 | 7 GB
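
The GGUF variant runs under llama.cpp bindings. A minimal sketch with llama-cpp-python follows; the filename is a placeholder for whichever quantization file you download, and it assumes the LoRA weights were merged before conversion (otherwise the base pretrain weights alone will translate poorly).

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename: substitute the actual GGUF file you downloaded.
llm = Llama(model_path="alma-13b-pretrain.Q4_K_M.gguf", n_ctx=4096)

prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
out = llm(prompt, max_tokens=32, temperature=0.6, top_p=0.9)
print(out["choices"][0]["text"])
```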

Best Alternatives to ALMA 13B Pretrain

Best Alternatives | Context / RAM | Downloads | Likes
Yarn Llama 2 13B 128K | 128K / 26 GB | 4968 | 113
Luminaura RP 13B | 128K / 26 GB | 27 | 0
Agent Llama2 13B 80K | 80K / 26.4 GB | 15 | 0
Chat Llama2 13B 80K | 80K / 52.8 GB | 13 | 0
Yarn Llama 2 13B 64K | 64K / 26 GB | 7190 | 17
LongAlign 13B 64K | 64K / 26 GB | 31 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 26 | 3
Openbuddy Llama2 13B V15p1 64K | 64K / 26.1 GB | 13 | 4
Openbuddy Llama2 13b64k V15 | 64K / 26.1 GB | 16 | 1
Airoboros L2 13B 2.1 YaRN 64K | 64K / 26 GB | 13 | 7

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227