Goliath LongLORA 120B Rope8 32K Fp16 AWQ by TheBloke

Tags: 4-bit, AWQ, Autotrain compatible, Base model: grimulkan/Goliath-longLORA-120b-rope8-32k-fp16 (quantized), Fp16, Llama, Quantized, Region: us, Safetensors, Sharded, Tensorflow

Goliath LongLORA 120B Rope8 32K Fp16 AWQ Benchmarks

Scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Goliath LongLORA 120B Rope8 32K Fp16 AWQ (TheBloke/Goliath-longLORA-120b-rope8-32k-fp16-AWQ)

Goliath LongLORA 120B Rope8 32K Fp16 AWQ Parameters and Internals

Model Type: llama
Additional Notes: This repo contains AWQ model files for inference; quantized versions are available in various formats.
Input / Output
Input Format:
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
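
As a usage sketch, the template can be filled in programmatically before tokenization. The helper below is illustrative only (the function name and example instruction are not from the original repo):

    # Build an Alpaca-style prompt for this model (illustrative helper).
    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{prompt}\n\n### Response:\n"
    )

    def build_prompt(instruction: str) -> str:
        # Drop the user instruction into the {prompt} slot of the template.
        return ALPACA_TEMPLATE.format(prompt=instruction)

    print(build_prompt("Summarize the plot of Hamlet in two sentences."))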
LLM Name: Goliath LongLORA 120B Rope8 32K Fp16 AWQ
Repository: 🤗 https://huggingface.co/TheBloke/Goliath-longLORA-120b-rope8-32k-fp16-AWQ
Model Name: Goliath LongLORA 120B Rope8 32K
Model Creator: Grimulkan
Base Model(s): grimulkan/Goliath-longLORA-120b-rope8-32k-fp16
Model Size: 120b
Required VRAM: 61.9 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Model Files: 9.9 GB (1-of-7), 9.9 GB (2-of-7), 9.9 GB (3-of-7), 10.0 GB (4-of-7), 9.9 GB (5-of-7), 9.9 GB (6-of-7), 2.4 GB (7-of-7)
AWQ Quantization: Yes
Quantization Type: fp16|awq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
(The config reports a base context of 4096; the "rope8-32k" in the name presumably refers to RoPE scaling by a factor of 8, i.e. 4096 × 8 = 32768 tokens.)
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
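
As a hedged usage sketch (not from the original repo), the AWQ weights can be loaded through Transformers (>= 4.37 per the table above) with the autoawq and accelerate packages installed; the RoPE-scaling override below is an assumption based on the model name, not a documented requirement:

    # Minimal sketch: load the AWQ checkpoint with Transformers
    # (requires the autoawq and accelerate packages).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/Goliath-longLORA-120b-rope8-32k-fp16-AWQ"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # shard the ~62 GB of AWQ weights across available GPUs
        # Assumption: the 32K context comes from linear RoPE scaling (factor 8 over
        # the 4096 base). Override only if the shipped config does not already set it.
        rope_scaling={"type": "linear", "factor": 8.0},
    )

    # Alpaca-style prompt (see the Input Format section above).
    prompt = (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        "### Instruction:\nSummarize the plot of Hamlet in two sentences.\n\n"
        "### Response:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))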

Best Alternatives to Goliath LongLORA 120B Rope8 32K Fp16 AWQ

Best Alternatives                              Context / RAM     Downloads  Likes
MegaDolphin 120B AWQ                           4K / 63.3 GB      117        2
DiscoLM 120B AWQ                               4K / 61.9 GB      10         2
Goliath 120B AWQ                               4K / 61.9 GB      61         11
Miquella 120B 3.0bpw H6 EXL2                   31K / 44.8 GB     5          10
Miquella 120B 8.0bpw H8 EXL2                   31K / 118.1 GB    4          3
Miquella 120B 4.0bpw H6 EXL2                   31K / 59.4 GB     5          2
...t 120B Cat A Llama EXL2 5.5bpw              8K / 85.3 GB      8          0
...t 120B Cat A Llama EXL2 4.5bpw              8K / 70.3 GB      7          1
Goliath LongLORA 120B Rope8 32K Fp16           4K / 235.4 GB     6          7
Goliath LongLORA 120B Rope8 32K 6bpw H8 EXL2   4K / 88.7 GB      6          1
Note: a green score (e.g. "73.2") means the model performs better than TheBloke/Goliath-longLORA-120b-rope8-32k-fp16-AWQ.

Rank the Goliath LongLORA 120B Rope8 32K Fp16 AWQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227