Nucleus 22B Token 500B AWQ by TheBloke


Tags: 4-bit · AWQ · autotrain-compatible · base model: nucleusai/nucleus-2... · base model (quantized): nucleusai... · en · llama · quantized · region:us · safetensors · sharded · tensorflow

Nucleus 22B Token 500B AWQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Nucleus 22B Token 500B AWQ (TheBloke/nucleus-22B-token-500B-AWQ)

Nucleus 22B Token 500B AWQ Parameters and Internals

Model Type: causal decoder-only
Additional Notes: quantized by TheBloke with AWQ, a low-bit weight-only quantization method
Supported Languages: en (high)
Training Details:
  Data Sources: RefinedWeb-English, books, code, technical content, math
  Data Volume: 500B tokens
  Context Length: 2048
  Training Time: about two weeks
  Hardware Used: 256 A100 80GB GPUs
Input/Output:
  Input Format: {prompt}
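
Given the raw "{prompt}" input format, a minimal sketch of loading and querying the checkpoint is shown below. It assumes transformers >= 4.35.2 with the accelerate and autoawq packages installed and a CUDA GPU with roughly 12 GB of free VRAM; the prompt string is purely illustrative and is not an official example from the model card.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/nucleus-22B-token-500B-AWQ"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # the checkpoint's tensors are stored in float16
        device_map="auto",          # let accelerate place the two safetensors shards
    )

    # Base (non-chat) model: the prompt template is simply "{prompt}".
    prompt = "The three primary colors are"  # illustrative prompt, not from the card
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Note that generation is bounded by the model's 2048-token context window, so prompt plus output must fit within that limit.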
LLM Name: Nucleus 22B Token 500B AWQ
Repository 🤗: https://huggingface.co/TheBloke/nucleus-22B-token-500B-AWQ
Model Name: Nucleus 22B Token 500B
Model Creator: NucleusAI
Base Model(s): Nucleus 22B Token 500B (NucleusAI/nucleus-22B-token-500B)
Model Size: 22B
Required VRAM: 12 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Model Files: 1-of-2 (10.0 GB), 2-of-2 (2.0 GB)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
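
The architecture and tokenizer values above can be checked directly against the repository's config and tokenizer files; a short sketch, with the expected values taken from the table above rather than from the model card itself:

    from transformers import AutoConfig, AutoTokenizer

    model_id = "TheBloke/nucleus-22B-token-500B-AWQ"

    config = AutoConfig.from_pretrained(model_id)        # reads config.json
    tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer

    print(config.architectures)            # expected: ['LlamaForCausalLM']
    print(config.max_position_embeddings)  # expected: 2048
    print(config.vocab_size)               # expected: 32000
    print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>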

Best Alternatives to Nucleus 22B Token 500B AWQ

Best Alternatives | Context / RAM | Downloads | Likes
Llama2 22B Daydreamer V3 AWQ | 4K / 12 GB | 10 | 2
Dendrite 22Bchk2 F16 | 4K / 43.7 GB | 10 | 1
Calm3 22B Chat | 16K / 44.9 GB | 5498 | 72
Calm3 22B RP V2 | 16K / 44.9 GB | 1391 | 1
Calm3 22B RP V0.1 | 16K / 44.9 GB | 5 | 0
Yousei 22B | 4K / 44.5 GB | 1326 | 2
Llama2 22B Daydreamer V3 | 4K / 43.7 GB | 1300 | 11
Platypus 2 22B Relora | 4K / 43.7 GB | 1268 | 1
Llama2 22B | 4K / 43.7 GB | 1303 | 46
Llama2 22B Blocktriangular | 4K / 43.7 GB | 1366 | 4
Note: a green score (e.g. "73.2") means the model scores better than TheBloke/nucleus-22B-token-500B-AWQ.

Rank the Nucleus 22B Token 500B AWQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 42,625 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227