Goliath 120B by alpindale


Tags: Autotrain compatible · Conversational · En · Endpoints compatible · Llama · Merge · Region: US · Safetensors · Sharded · Tensorflow
Model Card on HF 🤗: https://huggingface.co/alpindale/goliath-120b

Goliath 120B Benchmarks

[Benchmark chart: Goliath 120B (alpindale/goliath-120b) scores shown as percentages relative to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").]

Goliath 120B Parameters and Internals

Model Type: auto-regressive causal LM
Additional Notes: merges the Xwin and Euryale models, each contributing specified layer ranges.
Supported Languages: en (proficient)
Training Details:
  Methodology: combining two fine-tuned Llama-2 70B models into one by interleaving their layers (see the sketch below).
  Model Architecture: uses layers from the Xwin and Euryale models over specified ranges.
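The exact merge recipe is not reproduced on this page, so the sketch below only illustrates the idea: a passthrough-style "frankenmerge" that builds the ~120B model by stacking overlapping blocks of decoder layers taken alternately from the two 70B parents. The repository IDs and slice boundaries are assumptions for illustration, not the confirmed values used for Goliath 120B.

```python
# Illustrative layer-interleave ("frankenmerge") plan for combining two
# Llama-2 70B fine-tunes into a single ~120B model.
# NOTE: the source repo IDs and slice boundaries below are assumed examples,
# not the exact recipe behind alpindale/goliath-120b.

SOURCES = {
    "xwin": "Xwin-LM/Xwin-LM-70B-V0.1",      # assumed Xwin checkpoint
    "euryale": "Sao10K/Euryale-1.3-L2-70B",  # assumed Euryale checkpoint
}

# (source, first_layer, last_layer_exclusive) over the 80 decoder layers of
# each parent; the overlap between consecutive slices is what pushes the
# merged stack well past the 80 layers of either parent.
SLICES = [
    ("xwin", 0, 16),
    ("euryale", 8, 24),
    ("xwin", 17, 32),
    ("euryale", 25, 40),
    ("xwin", 33, 48),
    ("euryale", 41, 56),
    ("xwin", 49, 64),
    ("euryale", 57, 72),
    ("xwin", 65, 80),
]

def layer_map(slices):
    """For each decoder layer of the merged model, record which parent model
    and which parent layer its weights would be copied from."""
    mapping = []
    for source, start, stop in slices:
        mapping.extend((source, layer) for layer in range(start, stop))
    return mapping

if __name__ == "__main__":
    merged = layer_map(SLICES)
    print(f"merged decoder depth: {len(merged)} layers")
    print("first layer from:", merged[0], "| last layer from:", merged[-1])
```

In practice merges of this kind are usually produced with a tool such as mergekit's passthrough method, where the same slice list is written out as a config file rather than built by hand.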
Input / Output:
  Input Format: both Vicuna and Alpaca prompt formats can be used (examples below).
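Neither template is spelled out on this page, so the snippets below follow the common community conventions for Alpaca- and Vicuna-style prompting; treat the exact wording and spacing as assumptions rather than an official format for this model.

```python
# Common Alpaca- and Vicuna-style prompt builders (community conventions,
# not an official Goliath 120B template).

def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style single-turn instruction prompt."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def vicuna_prompt(user_message: str) -> str:
    """Vicuna-style single-turn chat prompt."""
    return (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n"
        f"USER: {user_message}\nASSISTANT:"
    )

print(alpaca_prompt("Explain what a model merge is in one paragraph."))
```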
LLM Name: Goliath 120B
Repository 🤗: https://huggingface.co/alpindale/goliath-120b
Model Size: 120b
Required VRAM: 235.4 GB
Updated: 2025-02-05
Maintainer: alpindale
Model Type: llama
Model Files (24 safetensors shards): 9.9 GB (1-of-24), 9.8 GB (2-of-24), 10.0 GB (3-of-24), 9.9 GB (4-of-24), 9.8 GB (5-of-24), 9.8 GB (6-of-24), 9.9 GB (7-of-24), 10.0 GB (8-of-24), 9.9 GB (9-of-24), 9.7 GB (10-of-24), 9.8 GB (11-of-24), 9.9 GB (12-of-24), 9.9 GB (13-of-24), 10.0 GB (14-of-24), 9.9 GB (15-of-24), 9.9 GB (16-of-24), 10.0 GB (17-of-24), 9.8 GB (18-of-24), 10.0 GB (19-of-24), 9.8 GB (20-of-24), 9.8 GB (21-of-24), 9.8 GB (22-of-24), 9.8 GB (23-of-24), 8.3 GB (24-of-24)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
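A minimal loading sketch based on the specs above (LlamaForCausalLM, float16 weights in 24 shards, 4096-token context). The use of device_map="auto" and the generation settings are assumptions; in fp16 the ~235 GB of weights realistically require several high-memory GPUs or CPU offload via accelerate.

```python
# Minimal sketch: load and query alpindale/goliath-120b with transformers.
# Assumes accelerate is installed so device_map="auto" can shard the 24
# float16 shards across available GPUs/CPU RAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "alpindale/goliath-120b"
tokenizer = AutoTokenizer.from_pretrained(repo)   # LlamaTokenizer, 32k vocab
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the repo's float16 weights
    device_map="auto",          # spread layers across available devices
)

# Alpaca-style prompt, within the 4096-token context limit.
prompt = "### Instruction:\nSummarize what a frankenmerge is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```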

Quantized Models of Goliath 120B

Model             | Likes | Downloads | VRAM
Goliath 120B GGUF | 136   | 156       | 49 GB
Goliath 120B AWQ  | 11    | 61        | 61 GB
Goliath 120B GPTQ | 16    | 35        | 59 GB
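The table above lists quantized builds without repository IDs or filenames, so the example below is only a sketch of running a GGUF build locally with llama-cpp-python; the filename and quantization level are placeholders (the ~49 GB entry corresponds to a low-bit quant).

```python
# Sketch: run a quantized GGUF build of Goliath 120B with llama-cpp-python.
# The model_path is a hypothetical local filename; download whichever
# quantization level fits your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="goliath-120b.Q2_K.gguf",  # placeholder filename
    n_ctx=4096,                           # matches the model's context length
    n_gpu_layers=-1,                      # offload every layer if VRAM allows
)

out = llm(
    "### Instruction:\nWrite a haiku about model merging.\n\n### Response:\n",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```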

Best Alternatives to Goliath 120B

Best Alternatives                 | Context / RAM  | Downloads | Likes
Chat Goliath 120B 80K             | 80K / 236.3 GB | 14        | 1
Electric Sheep 120B               | 31K / 238.8 GB | 5         | 0
QueenLiz 120B                     | 31K / 240.9 GB | 11        | 3
Miquella 120B                     | 31K / 235.6 GB | 5         | 120
Meta Llama 3 225B Instruct        | 8K / 443.2 GB  | 7         | 18
...ma 3 Instruct 120B Cat A Llama | 8K / 243.9 GB  | 17        | 1
...0B Instruct Abliterated Merged | 8K / 243.7 GB  | 4         | 1
Colossus 120b                     | 4K / 207 GB    | 4         | 1
Mlx Community Goliath 120B        | 4K / 234 GB    | 4         | 2
Megamarcoroni 120B                | 4K / 212 GB    | 53        | 0
Note: a green score (e.g. "73.2") indicates that the model outperforms alpindale/goliath-120b.

Original data from HuggingFace, OpenCompass and various public git repos.