ALMA 13B Pretrain AWQ by TheBloke


Tags: arxiv:2309.11674 · 4-bit · autotrain-compatible · awq · base model: haoranxu/ALMA-13B-Pretrain · base model (quantized): haoranxu/ALMA-13B-Pretrain · llama · quantized · region: us · safetensors

ALMA 13B Pretrain AWQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
ALMA 13B Pretrain AWQ (TheBloke/ALMA-13B-Pretrain-AWQ)

ALMA 13B Pretrain AWQ Parameters and Internals

Model Type: llama
Use Cases:
  Primary Use Cases: Translation
Additional Notes: Quantized by TheBloke using the AWQ method. AWQ models offer faster inference and deploy efficiently on smaller GPUs.
Supported Languages: Chinese (high), English (high)
Training Details:
  Data Sources: monolingual data, parallel data
  Data Volume: 12B monolingual tokens
  Methodology: full-weight and LoRA fine-tuning
Input Output:
  Input Format: "Translate this from Chinese to English: Chinese: {prompt} English:" (see the usage sketch below)
  Accepted Modalities: text
  Output Format: text
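
For concreteness, here is a minimal usage sketch of the prompt format via the Hugging Face transformers AWQ integration (assumes transformers ≥ 4.35 and the autoawq package are installed; the Chinese example sentence is illustrative, and the line breaks in the prompt follow the upstream ALMA repository's template):

```python
# Minimal sketch: greedy translation with the AWQ checkpoint via
# Hugging Face transformers (requires the `autoawq` package; the AWQ
# quantization config is read from the repository automatically).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/ALMA-13B-Pretrain-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The ALMA prompt format documented above; line breaks between the
# instruction, source, and target segments follow the upstream ALMA repo.
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Strip the prompt tokens so only the generated translation is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```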
LLM Name: ALMA 13B Pretrain AWQ
Repository: https://huggingface.co/TheBloke/ALMA-13B-Pretrain-AWQ
Model Name: ALMA 13B Pretrain
Model Creator: haoranxu
Base Model(s): haoranxu/ALMA-13B-Pretrain
Model Size: 13B
Required VRAM: 7.2 GB
Updated: 2025-02-22
Maintainer: TheBloke
Model Type: llama
Model Files: 7.2 GB
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float32
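
Given the AWQ quantization and 4096-token context length listed above, one common deployment path is vLLM. A hedged sketch follows (assumes the vllm package is installed; the quantization="awq" and max_model_len settings simply mirror the metadata table and are not prescribed by the model card):

```python
# Minimal sketch: serving the AWQ checkpoint with vLLM. The quantization
# and context-length settings mirror the metadata table above.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/ALMA-13B-Pretrain-AWQ",
    quantization="awq",   # select vLLM's AWQ kernels
    max_model_len=4096,   # matches the 4096 context length above
)
params = SamplingParams(temperature=0.0, max_tokens=100)
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
print(llm.generate([prompt], params)[0].outputs[0].text)
```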

Best Alternatives to ALMA 13B Pretrain AWQ

Best Alternatives | Context / RAM | Downloads | Likes
Yarn Llama 2 13B 128K AWQ | 128K / 7.2 GB | 115 | 2
LongAlign 13B 64K AWQ | 64K / 7.2 GB | 82 | 2
...oboros L2 13B 2 1 YaRN 64K AWQ | 64K / 7.2 GB | 117 | 2
OrcaMaid V3 13B 32K AWQ | 32K / 7.2 GB | 104 | 4
OrcaMaid V2 FIX 13B 32K AWQ | 32K / 7.2 GB | 6 | 1
NexusRaven V2 13B AWQ | 16K / 7.2 GB | 74 | 3
NexusRaven V2 13B AWQ | 16K / 7.2 GB | 11 | 3
...th CodeLlama 13B Python Hf AWQ | 16K / 7.5 GB | 8 | 0
WhiteRabbitNeo 13B AWQ | 16K / 7.2 GB | 27 | 4
NexusRaven V2 13B AWQ | 16K / 7.2 GB | 84 | 1
Note: a green score (e.g. "73.2") means the listed alternative outperforms TheBloke/ALMA-13B-Pretrain-AWQ.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227