EZO Qwen2.5 72B Instruct by AXCXEPT

Tags: base model (finetune): qwen/qwen2.5-72b-instruct, chat, conversational, en, instruct, ja, qwen2, region:us, safetensors, sharded, tensorflow

EZO Qwen2.5 72B Instruct Parameters and Internals

Model Type: text-generation, chat
Use Cases: global language applications
Supported Languages: ja (high proficiency), en (supported); see the chat-template sketch below
Training Details:
  Data Sources: Japanese Wikipedia, FineWeb
  Methodology: plain instruction tuning
  Training Time: 32h
  Hardware Used: 4x A100
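
Since the model lists Japanese as its primary language, the exact prompt rendering matters. Below is a minimal sketch of how a conversation would be rendered with the tokenizer's built-in Qwen2 chat template via standard transformers calls; the Japanese system and user text is illustrative, not taken from the model card.

```python
from transformers import AutoTokenizer

# Load the Qwen2Tokenizer from the model repo (full specs listed below).
tokenizer = AutoTokenizer.from_pretrained("AXCXEPT/EZO-Qwen2.5-72B-Instruct")

# Illustrative Japanese conversation; roles follow the standard chat schema.
messages = [
    {"role": "system", "content": "あなたは誠実で優秀なアシスタントです。"},
    {"role": "user", "content": "富士山の高さを教えてください。"},
]

# Render the conversation as a plain string using the built-in chat template.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # Shows the ChatML-style (<|im_start|>/<|im_end|>) prompt format.
```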
LLM Name: EZO Qwen2.5 72B Instruct
Repository: 🤗 https://huggingface.co/AXCXEPT/EZO-Qwen2.5-72B-Instruct
Base Model(s): Qwen/Qwen2.5-72B-Instruct
Model Size: 72B
Required VRAM: 190.3 GB
Updated: 2024-12-22
Maintainer: AXCXEPT
Model Type: qwen2
Instruction-Based: Yes
Model Files (two safetensors shard sets, 190.3 GB total):
  9-shard set (~44.3 GB): 5.0 GB x 2 (shards 1-2), 4.9 GB x 7 (shards 3-9)
  31-shard set (~146.0 GB): 4.5 GB (shard 1), 5.0 GB x 7 (shards 2, 6, 10, 14, 18, 22, 26), 4.8 GB x 21 (remaining shards 3-29), 3.2 GB (shard 30), 2.5 GB (shard 31)
Supported Languages: ja, en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
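
The specs above map directly onto a standard transformers loading path: Qwen2ForCausalLM in bfloat16 with a 32768-token window. A minimal sketch, assuming enough aggregate GPU memory for the roughly 146 GB of bf16 shards and relying on device_map="auto" to spread layers across devices; the prompt and generation length are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AXCXEPT/EZO-Qwen2.5-72B-Instruct"

# bfloat16 matches the checkpoint's native dtype (see Torch Data Type above);
# device_map="auto" shards the ~146 GB of weights across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "bfloat16を一文で説明してください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate up to 256 new tokens (an illustrative cap, well inside the 32768 context).
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```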

Quantized Models of the EZO Qwen2.5 72B Instruct

Model | Likes | Downloads | VRAM
...CoTRAG Qwen2.5 72B Instruct Q4 | 8 | 83 | 44 GB
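
For machines without ~146 GB of GPU memory, one alternative to a pre-quantized checkpoint like the Q4 build above is loading the full repo with on-the-fly 4-bit quantization. A minimal sketch using the transformers bitsandbytes integration; the NF4 settings are common defaults rather than values from the model card, and actual memory use will not exactly match the 44 GB listed for the pre-quantized build.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "AXCXEPT/EZO-Qwen2.5-72B-Instruct"

# 4-bit NF4 quantization with bf16 compute: roughly quarters weight memory
# relative to the bf16 checkpoint, at some cost in output quality.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```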

Best Alternatives to EZO Qwen2.5 72B Instruct

Best Alternatives | Context / RAM | Downloads | Likes
EVA Qwen2.5 72B V0.2 | 128K / 146 GB | 1232 | 10
...n2.5 72B 2x Instruct TIES V1.0 | 128K / 146.1 GB | 23 | 1
72B Qwen2.5 Kunou V1 | 128K / 146 GB | 555 | 18
EVA Qwen2.5 72B V0.1 | 128K / 146 GB | 646 | 13
EVA Qwen2.5 72B V0.0 | 128K / 146 GB | 141 | 5
Athene V2 Chat | 32K / 146 GB | 7224 | 244
Qwen2.5 72B Instruct | 32K / 145.5 GB | 231550 | 616
Qwen2 72B Instruct | 32K / 145.5 GB | 40816 | 689
Rombos LLM V2.5 Qwen 72B | 32K / 146.1 GB | 2970 | 30
Magnum V1 72B | 32K / 146 GB | 5499 | 162


Original data from HuggingFace, OpenCompass and various public git repos.