AquilaChat2 34B 16K AWQ by TheBloke


Tags: 4-bit · Aquila · Autotrain compatible · AWQ · Base model: baai/aquilachat2-34... · Base model (quantized): baai/aqui... · Custom code · Quantized · Region: us · Safetensors · Sharded · Tensorflow

AquilaChat2 34B 16K AWQ Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
AquilaChat2 34B 16K AWQ (TheBloke/AquilaChat2-34B-16K-AWQ)

AquilaChat2 34B 16K AWQ Parameters and Internals

Model Type: aquila

Input / Output
Input Format: Human: {prompt} Assistant:
Accepted Modalities: text
Output Format: text
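The single-turn template above can be applied with a small helper. This is a sketch, not an official snippet; the exact whitespace around the role labels is an assumption based on the format shown.

```python
# Sketch of the single-turn prompt template listed above.
# Assumption: a single space separates the user text from "Assistant:".
def build_aquila_prompt(user_message: str) -> str:
    """Wrap a user message in the "Human: ... Assistant:" chat format."""
    return f"Human: {user_message} Assistant:"

print(build_aquila_prompt("What is AWQ quantization?"))
# Human: What is AWQ quantization? Assistant:
```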
Release Notes
Version: 1.2
Date: 2023-10-25
Notes: Improved long-text synthesis capabilities, approaching GPT-3.5-16K level. Enhanced performance in non-long-text scenarios through additional conventional instruction fine-tuning corpora.
LLM Name: AquilaChat2 34B 16K AWQ
Repository: 🤗 https://huggingface.co/TheBloke/AquilaChat2-34B-16K-AWQ
Model Name: Aquilachat2 34B 16K
Model Creator: Beijing Academy of Artificial Intelligence
Base Model(s): AquilaChat2 34B 16K (BAAI/AquilaChat2-34B-16K)
Model Size: 34b
Required VRAM: 19.3 GB
Updated: 2025-02-15
Maintainer: TheBloke
Model Type: aquila
Model Files: 10.0 GB (1 of 2), 9.3 GB (2 of 2)
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: AquilaForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.1
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 100008
Torch Data Type: float16
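Given the card's flags (AWQ, Custom code, Safetensors, two sharded files), loading would look roughly like the sketch below. This is a hedged sketch, not the maintainer's documented snippet: it assumes `transformers` with AWQ support (via `autoawq`) is installed and roughly 20 GB of GPU memory is free, and `trust_remote_code=True` reflects the "Custom code" tag.

```python
MODEL_ID = "TheBloke/AquilaChat2-34B-16K-AWQ"

def load_model(model_id: str = MODEL_ID):
    # Imports kept local so the sketch can be read without the heavy
    # dependencies installed; requires `transformers` plus `autoawq`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code=True because the card is flagged "Custom code".
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # places the two safetensors shards automatically
        trust_remote_code=True,
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Human: Hello! Assistant:", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The generation call is guarded behind `__main__` because it downloads roughly 19.3 GB of weights on first run.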

Best Alternatives to AquilaChat2 34B 16K AWQ

Best Alternatives              Context / RAM     Downloads / Likes
AquilaChat2 34B AWQ            4K / 19.3 GB      121
AquilaChat2 34B 16K            16K / 67.2 GB     13625
Aquila2 34B                    8K / 136.4 GB     296117
AquilaChat2 34B                4K / 67.2 GB      22647
AquilaChat2 34B 16K GPTQ       4K / 18.7 GB      476
AquilaChat2 34B GPTQ           4K / 18.7 GB      522
H2ogpt 16K Aquilachat2 34B     4K / 67.2 GB      714
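The size column also makes the quantization arithmetic easy to sanity-check: a 34B-parameter model at 4 bits per weight packs into about 17 GB, and the 19.3 GB AWQ checkpoint is roughly 3.5x smaller than the 67.2 GB FP16 original. A rough back-of-envelope calculation (figures taken from the tables above; the exact parameter count is an approximation):

```python
# Rough size arithmetic for the 34B AWQ checkpoint (figures from the card).
params = 34.0e9                   # approximate parameter count ("Model Size: 34b")
packed_gb = params * 4 / 8 / 1e9  # 4-bit weights -> GB, before quantization overhead
awq_gb, fp16_gb = 19.3, 67.2      # listed AWQ vs FP16 repository sizes

print(round(packed_gb, 1))        # 17.0 (group scales, zeros, fp16 layers add the rest)
print(round(fp16_gb / awq_gb, 2)) # 3.48 (x smaller than the FP16 original)
```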
Note: a green score (e.g. "73.2") means the model performs better than TheBloke/AquilaChat2-34B-16K-AWQ.

Rank the AquilaChat2 34B 16K AWQ Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for a specific open-source LLM or SLM? 43,137 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227