ALMA 7B Ja by webbigdata


Arxiv: 2309.11674 | Tags: autotrain compatible, cs, de, en, is, ja, llama, pytorch, region:us, safetensors, sharded, tensorflow, zh
Model Card on HF 🤗: https://huggingface.co/webbigdata/ALMA-7B-Ja


ALMA 7B Ja Parameters and Internals

Model Type 
machine translation
Use Cases 
Areas:
research, commercial applications
Applications:
machine translation
Primary Use Cases:
Japanese to English translation
Limitations:
Translation quality for languages other than Japanese and English has deteriorated, especially in the quantized versions.
Considerations:
For languages other than Japanese and English, it is recommended to use the original ALMA-13B model.
Additional Notes 
The original ALMA-7B model supports English and Russian translation, whereas the ALMA-7B-Ja version shifts its focus to Japanese and English.
Supported Languages 
Japanese (ja): High; English (en): High; German (de): Moderate; Chinese (zh): Moderate; Icelandic (is): Moderate; Czech (cs): Moderate; Russian (ru): Limited
Training Details 
Data Sources:
monolingual data, high-quality parallel data
Methodology:
Two-step fine-tuning: initial fine-tuning on monolingual data, followed by optimization with high-quality parallel data (a minimal sketch follows below).
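
For orientation, here is a minimal sketch of what such a two-step run can look like with Hugging Face transformers, assuming the generic ALMA approach (causal-LM training on monolingual text first, then on parallel pairs rendered as translation prompts). The base model name and the inline dummy datasets are illustrative stand-ins, not webbigdata's actual training setup.

```python
# Sketch of a two-step ALMA-style fine-tuning recipe (illustrative only).
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # assumption: a Llama-family base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.unk_token      # matches the <unk> padding token listed below
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

def encode(batch):
    return tok(batch["text"], truncation=True, max_length=4096)

# Step 1: monolingual data (dummy Japanese and English free text).
mono = Dataset.from_dict({"text": [
    "これは日本語の文章の例です。",
    "This is an example of English text.",
]}).map(encode, batched=True, remove_columns=["text"])

# Step 2: high-quality parallel data rendered as translation prompts.
parallel = Dataset.from_dict({"text": [
    "Translate this from Japanese to English:\nJapanese: 猫が眠っている。\nEnglish: The cat is sleeping.",
]}).map(encode, batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal-LM labels
for step, data in (("monolingual", mono), ("parallel", parallel)):
    Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"alma-step-{step}",
                               num_train_epochs=1,
                               per_device_train_batch_size=1),
        data_collator=collator,
        train_dataset=data,
    ).train()
```
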
Release Notes 
2024/03/04: C3TR-Adapter version released.
2023/10/21: ALMA-7B-Ja-V2 released with improved performance.
LLM Name: ALMA 7B Ja
Repository 🤗: https://huggingface.co/webbigdata/ALMA-7B-Ja
Model Size: 7b
Required VRAM: 13.5 GB
Updated: 2025-02-22
Maintainer: webbigdata
Model Type: llama
Model Files: 10.0 GB (part 1 of 2), 3.5 GB (part 2 of 2)
Supported Languages: ja, en, de, is, zh, cs
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
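
Given the parameters above (LlamaForCausalLM in bfloat16, 4096-token context, LlamaTokenizer with <unk> padding), a minimal inference sketch looks like the following. The prompt template is the ALMA-style "Translate this from X to Y" format; treat it as an assumption and check the model card for the exact recommended template and generation settings.

```python
# Sketch of Japanese-to-English inference with webbigdata/ALMA-7B-Ja.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "webbigdata/ALMA-7B-Ja"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)  # ~13.5 GB of VRAM required, per the listing above

prompt = ("Translate this from Japanese to English:\n"
          "Japanese: 明日は晴れるでしょう。\n"
          "English:")
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, i.e. the English translation.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
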

Best Alternatives to ALMA 7B Ja

Best Alternatives                      Context / RAM      Downloads  Likes
2 Very Sci Fi                          1024K / 16.1 GB    317        0
...1M 1000000ctx AEZAKMI 3 1 1702      1024K / 13.5 GB    23         1
... Qwen2.5llamaify 7B V23.1 200K      195K / 15.2 GB     3943       3
LlamaStock 8B                          128K / 16.1 GB     11         1
SuperNeuralDreadDevil 8B               128K / 16.1 GB     54         1
Yarn Llama 2 7B 128K                   128K / 13.5 GB     6422       39
LLaMA 7B PoSE YaRN 128K                128K / 13.5 GB     7          3
LLaMA 7B PoSE Linear 96K               96K / 27 GB        9          2
LLaMA 7B PoSE YaRN 96K                 96K / 13.5 GB      11         1
Chat Llama2 7B 80K                     80K / 13.8 GB      8          0
Note: green Score (e.g. "73.2") means that the model is better than webbigdata/ALMA-7B-Ja.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227